
Mock Tests

Mock exam 1


  1. When IAM policies are being evaluated for their logic of access, which two of the following
    statements are incorrect?

    1. Explicit denies are always overruled by an explicit allow.

    2. The order in which the policies are evaluated does not matter regarding the end result.

    3. Explicit allows are always overruled by an explicit deny.

    4. Access to all resources is denied by default until access is granted.

    5. Access to all resources is allowed by default until access is denied.

  2. Your security team has been tasked with implementing a solution to monitor your EC2 fleet of
    instances. Upon review, you decide to implement Amazon Inspector. What are the three
    prerequisites that you would need to implement before using Amazon Inspector? (Choose three.)

    1. Deploy Amazon Inspector agents to your EC2 fleet.

    2. Create an IAM service-linked role that allows Amazon Inspector to access your EC2 fleet.

    3. Create an Assessment Target group for your EC2 fleet.

    4. Deploy an Amazon Inspector log file to your EC2 fleet.

    5. Configure Amazon Inspector so that it runs at the root of your AWS account.

    6. Create an IAM group for your EC2 fleet.


  3. After analyzing VPC flow logs, you notice that restricted network traffic is entering a private
    subnet. After reviewing your
    Network Access Control Lists (NACLs), you verify that a
    custom NACL does exist that should be blocking this restricted traffic. What should you check
    to resolve the issue to ensure that the traffic is blocked at the subnet level?

    1. Check the inbound security group of the instances in the private subnet to ensure it is
      blocking the traffic.

    2. Check to see if the custom NACL has the restrictions associated with the private subnet.

    3. Check your VPC flow log configuration to see if it is configured to block the restricted
      traffic.

    4. Check the Main NACL associated with your VPC to see if it is conflicting with your
      custom NACL.

  4. When using AWS Shield, which type of rule counts the number of requests received from a
    particular IP address over a time period of 5 minutes?

    1. Standard-based

    2. Flow-based

    3. Rate-based

    4. Integer-based

    5. Following a breach on your network, an instance was compromised and you need to perform a
      forensic investigation of the affected instance. You decide to move the EC2 instance to your
      forensic account. Which steps would you take to carry out this process?

      1. Create an AMI from the affected EC2 instance and then share that AMI image with your
        forensic account. From within your forensic account, locate the AMI and create a new
        instance from the shared AMI.

      2. Create an AMI from the affected EC2 instance and then copy that AMI image to your
        forensic account. From within your forensic account, locate the AMI and create a new
        instance from the shared AMI.

      3. Create an EBS snapshot of the affected EC2 instance and then share that snapshot with
        your forensic account. From within your forensic account, launch a new instance and
        create a new volume using the snapshot and attach it to the instance.

      4. Create an EBS snapshot of the affected EC2 instance and then copy that snapshot to your
        forensic account. From within your forensic account, launch a new instance and create a
        new volume using the snapshot and attach it to the instance.


    6. What is the Log Delivery Group account used for within Amazon S3?

      1. This is a customer-defined group that's used to deliver AWS CloudTrail logs to a bucket.

      2. This is a predefined group by AWS that's used to deliver S3 server access logs to a
        bucket.

      3. This is a predefined group by AWS that's used to deliver AWS CloudTrail logs to a
        bucket.

      4. This is a customer-defined group by AWS that's used to deliver S3 server access logs to a
        bucket.

    7. After reviewing the following excerpt from a CloudTrail log, which statement is true?


      {
        "awsRegion": "eu-west-1",
        "eventID": "6ce47c89-5908-452d-87cc-a7c251ac4ac0",
        "eventName": "PutObject",
        "eventSource": "s3.amazonaws.com",
        "eventTime": "2019-11-27T23:54:21Z",
        "eventType": "AwsApiCall",
        "eventVersion": "1.05",
        "readOnly": false,
        "recipientAccountId": "730739171055",
        "requestID": "95BAC3B3C83CCC5D",
        "requestParameters": {
          "bucketName": "cloudtrailpackt",
          "Host": "cloudtrailpackt.s3.eu-west-1.amazonaws.com",
          "key": "Packt/AWSLogs/730739171055/CloudTrail/eu-west-1/2019/11/27/730739171055_CloudTrail_eu-west-1_20191127T2321Z_oDOj4tmndoN0pCW3.json.gz",
          "x-amz-acl": "bucket-owner-full-control",
          "x-amz-server-side-encryption": "AES256"
        }
      }


      1. A PutObject operation was performed in the cloudtrailpackt bucket without encryption.

      2. A PutObject operation was performed in the cloudtrailpackt bucket in the eu-west-2 region.

      3. A PutObject operation was performed in account 730739171055 using encryption.

      4. A PutObject operation was performed on 2019-11-27 in the packt bucket using encryption.
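The facts the question hinges on (event name, account, region, and encryption) all sit in named fields of the CloudTrail record. A minimal sketch of pulling them out with Python's json module, using an abbreviated copy of the excerpt above:

```python
import json

# Abbreviated CloudTrail record based on the excerpt above
record = json.loads("""
{
  "awsRegion": "eu-west-1",
  "eventName": "PutObject",
  "recipientAccountId": "730739171055",
  "requestParameters": {
    "bucketName": "cloudtrailpackt",
    "x-amz-server-side-encryption": "AES256"
  }
}
""")

params = record["requestParameters"]
# The server-side-encryption header confirms the object was encrypted (AES256)
encryption = params.get("x-amz-server-side-encryption")

print(record["eventName"], record["recipientAccountId"], encryption)
```

Reading the fields this way makes the region (eu-west-1, not eu-west-2), bucket name, and encryption status unambiguous when comparing the answer options.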


    8. You have just joined a new startup organization as a security lead. Processes dictate that all
      your RDS databases must be deployed with Multi-AZ configured. For any new RDS
      deployments, you want to check whether high availability is enabled for your Amazon RDS DB
      instances. What should you configure to ensure that this process is being followed?

      1. Use AWS Config to set up the rds-multi-az compliance check.

      2. Use CloudWatch logs to detect RDS single AZ deployments.

      3. Use CloudTrail logs to search for RDS deployments with the rds-multi-az=false parameter.

      4. Use SNS so that you're emailed every time an RDS single AZ deployment is configured.

    9. Which of the following is NOT considered a security best practice?

      1. Enable Multi-Factor Authentication (MFA).

      2. Remove the root account access keys.

      3. Associate IAM users with a single resource-based policy.

      4. Enable AWS CloudTrail.

    10. You are using the KMS service to perform encryption within Amazon S3 using a
      customer-created CMK in eu-west-1 against a bucket named encrypt_me. A colleague
      explains that they are unable to see the CMK when they try to use it to encrypt data in a
      bucket named encrypt_me_too in us-east-1. What is the most likely cause of this?

      1. Your colleague does not have permission to encrypt with the CMK.

      2. CMKs are regional, so it will not appear in us-east-1.

      3. If a CMK has been used on one bucket, it can't be used on another.

      4. The CMK has become corrupt and it will need to be recreated within KMS.

    11. A developer in your organization requires access to perform cryptographic functions using a
      customer-managed CMK. What do you need to update so that you can add permissions for the
      developer to allow them to use the CMK?

      1. KMS policy.

      2. CMK policy.

      3. Key policy.

      4. Encryption policy.


    12. KMS key policies allow you to configure access to, and the use of, CMKs in a variety of
      ways. Which of the following is NOT a method of allowing access?

      1. Via Key Policies – all access is governed by the Key policy alone.

      2. Via Key Policies and IAM – access is governed by the Key policy in addition to IAM
        identity-based policies, allowing you to manage access via groups and other IAM
        features.

      3. Via Key Policies and Grants – access is governed by the Key policy with the added
        ability to delegate access to others so they can use the CMK.

      4. Via Key Policies and IAM Roles – associating the Key policy with the role, thereby
        granting permissions to resources and identities that the role is associated with.

    13. Which is NOT a valid method of S3 encryption?

      1. Server-Side Encryption with S3 Managed Keys (SSE-S3)

      2. Server-Side Encryption with CMK Managed Keys (SSE-CMK)

      3. Server-Side Encryption with KMS Managed Keys (SSE-KMS)

      4. Server-Side Encryption with Customer Managed Keys (SSE-C)

      5. Client-Side Encryption with KMS Managed Keys (CSE-KMS)

      6. Client-Side Encryption with Customer Managed Keys (CSE-C)

    14. Your IAM administrator has created 20 IAM users within your organization's production AWS
      account. All users must be able to access AWS resources using the AWS Management Console,
      in addition to programmatic access via the AWS CLI. Which steps must be implemented to
      allow both methods of access? (Choose two.)

      1. Associate each user with a role that grants permissions that allows programmatic access.

      2. Create a user account with their own IAM credentials and password.

      3. Create an access key and secret access key for every user.

      4. Add the user to the power users group.

      5. Implement Multi-Factor Authentication (MFA) for each user and configure their virtual
        MFA device.

    15. You are configuring a number of different service roles to be associated with EC2 instances.
      During the creation of these roles, two components are established: the role itself and one
      other. Which component is also created, following the creation of a service role?

      1. An IAM group that the role is attached to

      2. An instance profile

      3. Temporary instance access keys

      4. A new instance associated with the new service role

    16. Microsoft Active Directory Federation Services (ADFS) can be used as an Identity Provider (IdP) to enable federated access to the AWS Management Console. As part of the
      authentication process, which API is used to request temporary credentials to enable access?

      1. AssumeRoleWithSAML

      2. AssumeIDP

      3. AssumeADFS

      4. AssumeRoleUsingADFS

      5. AssumeFederationRole

    17. When configuring your IdP from within IAM, which document do you need to provide that
      includes the issuer's name, expiration information, and keys that can be used to validate the
      SAML authentication response (assertions) that are received from the IdP?

      1. SAML response document

      2. Metadata document

      3. IDP federation document

      4. IDP document

    18. Your CTO has asked you to find a simple and secure way to perform administrative tasks and
      configurational changes remotely against a selection of EC2 instances within your production
      environment. Which option should you choose?

      1. Use the Run command in AWS Systems Manager.

      2. Use built-in insights in AWS Systems Manager.

      3. Use State Manager in AWS Systems Manager.

      4. Use Session Manager in AWS Systems Manager.


    19. Your organization is running a global retail e-commerce website in which customers from
      around the world search your website, adding products to their shopping cart before ordering
      and paying for the items. During a meeting to redesign the infrastructure, you have been
      instructed to define a solution where routing APIs to microservices can be managed, in addition
      to adding security features so that users can manage authentication and access control and
      monitor all requests that are made from concurrent API calls. Which service should you
      implement to manage these requirements?

      1. Amazon CloudFront

      2. AWS Lambda@Edge

      3. AWS API Gateway

      4. AWS API Manager

      5. AWS Shield

    20. Your organization has been the victim of a massive DDoS attack. You have decided to use the
      AWS DDoS Response Team (DRT) for extra support to help you analyze and monitor
      malicious activity within your account. To help the DRT with your investigation, they need
      access to your AWS WAF rules and web ACLs. How can you provide this access?

      1. Using an IAM role with the AWSShieldDRTAccessPolicy managed policy attached, which trusts
        the service principal of
        drt.shield.amazonaws.com to use the role

      2. Using an IAM role with the AWSShieldAccessPolicy managed policy attached, which trusts the
        service principal of
        shield.drt.amazonaws.com to use the role

      3. Using an IAM role with the ShieldDRTAccessPolicy managed policy attached, which trusts the
        service principal of
        drt.shied.amazonaws.com to use the role

      4. Using an IAM role with the AWSShielDRTAccess managed policy attached, which trusts the
        service principal of
        drt.amazonaws.com to use the role

    21. One of your instances within a private subnet of your production network may have been
      compromised. Since you work within the incident team, you have been asked to isolate the
      instance from other resources immediately, without affecting other production EC2 instances in
      the same subnet. Which approaches should be followed in this situation? (Choose two.)

      1. Delete the key pair associated with the EC2 instance.

      2. Remove any role associated with the EC2 instance.

      3. Update the route table of the subnet associated with the EC2 instance to remove the entry
        for the NAT gateway.

      4. Change the security group of the instance to a restricted security group, thereby preventing
        any access to or from the instance.

      5. Move the EC2 instance to the public subnet.

    22. You have implemented a VPN connection between your data center and your AWS VPC. You
      then enabled route propagation to ensure that all the other routes to networks represented
      across your site-to-site VPN connection are automatically added to your route table.
      However, you notice that you now have overlapping CIDR blocks between your propagated
      routes and existing static routes. Which statement is true?

      1. The routes will be automatically deleted from your route table as having overlapping
        CIDR blocks is not possible in a route table.

      2. Your static routes will take precedence over propagated routes.

      3. Your propagated routes will take precedence over your static routes.

      4. The longest prefix match will determine which route takes precedence.

    23. Your CTO has explained that they are looking for a solution to be able to monitor network
      packets across your VPC. You suggest VPC flow logs, but the CTO wants to implement a
      solution whereby captured traffic is sent to a Network Load Balancer, using UDP as a listener,
      which sits in front of a fleet of appliances dedicated to network analysis. What solution would
      you suggest to the CTO?

      1. Use the AWS Transit Gateway to capture packets and use the NLB as a Target.

      2. Use Traffic Mirroring to capture packets and use the NLB as a Target.

      3. Use VPC Tunneling to capture packets and use the NLB as a Target.

      4. Use Traffic Capture to capture packets and use the NLB as a Target.

      5. Use VPC Transit to capture packets and use the NLB as a Target.


    24. You have been tasked with defining a central repository that enables you to view real-time
      logging information from different AWS services that can be filtered and queried to search for
      specific events or error codes. Which of the following would you use?

      1. Amazon GuardDuty

      2. Amazon S3 Server Access logs

      3. Amazon Kinesis

      4. Amazon CloudWatch logs

      5. AWS Config logs

    25. Which feature of AWS CloudTrail can be used for forensic investigation to confirm that your
      log files have not been tampered with?

      1. Select Encrypt Log Files with SSE-KMS.

      2. Select Log File Validation.

      3. Select Encrypt Log Validation.

      4. Select Enable Log Tamper Detection.

    26. Which service is being described here? "____ is a fully managed intelligent threat
      detection service, powered by machine learning, that continually provides insights into unusual
      and/or unexpected behavioral patterns that could be considered malicious within your account."

      1. AWS Config

      2. Amazon Inspector

      3. AWS Trusted Advisor

      4. Amazon GuardDuty

    27. When it comes to data encryption, it is important to understand the difference between
      asymmetric and symmetric key encryption. Select the statements that are true. (Choose two.)

      1. Symmetric encryption uses a single key to encrypt and decrypt data.

      2. Asymmetric encryption uses a single key to encrypt and decrypt data.

      3. Symmetric encryption keys use two keys to perform the encryption.

      4. Asymmetric encryption keys use two keys to perform the encryption.
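The distinction this question tests can be shown with a toy sketch: symmetric encryption reverses with the same single key, while asymmetric encryption uses a public/private key pair. The numbers below are textbook-sized RSA parameters for illustration only, not a secure implementation:

```python
# Symmetric: ONE shared key both encrypts and decrypts
# (XOR stands in for a real cipher here)
key = 42
plaintext = 65
ciphertext = plaintext ^ key
assert ciphertext ^ key == plaintext  # the same key undoes the encryption

# Asymmetric: TWO related keys -- the public key (e, n) encrypts,
# the private key (d, n) decrypts.
# Classic textbook RSA parameters: p=61, q=53 -> n=3233, e=17, d=2753
n, e, d = 3233, 17, 2753
m = 65
c = pow(m, e, n)          # encrypt with the public key
assert pow(c, d, n) == m  # only the private key recovers the plaintext

print(ciphertext, c)
```

The round trips above are why the two correct statements pair up as they do: one key for symmetric, two keys for asymmetric.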

    28. You need to encrypt data being stored across your EBS volumes in your VPC with minimal
      management, but you want to be able to audit and track their usage. Which type of AWS KMS
      key will you use?

      1. AWS owned

      2. AWS managed

      3. Customer managed

      4. Customer owned


    29. You have been asked to ensure that your organization's data is encrypted when stored on S3.
      The requirements specify that encryption must happen before the object is uploaded using keys
      managed by AWS. Which S3 encryption option is best suited for this?

      1. SSE-KMS

      2. CSE-KMS

      3. SSE-S3

      4. CSE-C

      5. SSE-C

    30. What is the disadvantage of importing your own key material into a customer-managed CMK?

      1. It does not support automatic key rotation.

      2. It does not support the creation of data encryption keys.

      3. The key material automatically expires after 12 months.

      4. You are unable to define additional key administrators.

    31. When encrypting an EBS volume, which kinds of keys can be used? (Choose three.)

      1. AWS managed CMK key

      2. AWS owned CMK key

      3. AWS created CMK key

      4. Customer CMK key

      5. Customer DEK key

    32. You have been tasked with granting permissions for your IT corporate workforce of 500+ users
      so that they can access the AWS Management Console to administer and deploy AWS

      resources. Your organization currently uses Microsoft Active Directory (MSAD) to
      authenticate users internally. None of your users currently have IAM user accounts and your
      manager has asked you to configure their AWS access with the least administrative effort.
      Which method would be best?

      1. Create 500 AWS user accounts and assign permissions to each account accordingly.

      2. Configure web identity federation with LDAP, allowing it to query MSAD as your
        authentication into your AWS account. This is used in configuration with AWS roles.

      3. Configure SAML 2.0 federation with LDAP, allowing it to query MSAD as your
        authentication into your AWS account. This is used in conjunction with AWS roles.

      4. Share access keys and secret access keys across your user base, allowing AWS
        Management Console access.

    33. Take a look at the following IAM policy associated with a role. Which statement is true?


      {
        "Version": "2012-10-17",
        "Statement": {
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::356903128354:user/Stuart"},
          "Action": "sts:AssumeRole",
          "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
        }
      }


      1. The user "Stuart" is denied access to assume the role.

      2. Any users can assume the role if the user has used MFA to verify their credentials.

      3. The role can be assumed for the user "Stuart" if the user uses MFA as an authentication
        method.

      4. The principal is allowed to assume the role using existing permissions granted by MFA.
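The trust policy in this question comes down to two checks: the caller must match the Principal, and the MultiFactorAuthPresent condition must be true. A toy sketch of that logic in Python (this is an illustration only, not the real IAM evaluation engine):

```python
# Toy illustration of the trust-policy logic in the question above --
# NOT the real IAM evaluation engine, just the two checks that matter.
POLICY_PRINCIPAL = "arn:aws:iam::356903128354:user/Stuart"

def can_assume_role(caller_arn: str, mfa_present: bool) -> bool:
    # The caller must be the named principal AND the
    # aws:MultiFactorAuthPresent condition must evaluate to true
    return caller_arn == POLICY_PRINCIPAL and mfa_present

# "Stuart" with MFA succeeds; without MFA, or as another user, access fails
print(can_assume_role("arn:aws:iam::356903128354:user/Stuart", True))   # True
print(can_assume_role("arn:aws:iam::356903128354:user/Stuart", False))  # False
print(can_assume_role("arn:aws:iam::356903128354:user/Alice", True))    # False
```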


    34. Which policies do NOT require a principal parameter within the context of the policy?
      (Choose two.)

      1. An Amazon S3 bucket policy.

      2. A key policy within KMS associated with a customer created CMK.

      3. An inline IAM policy.

      4. A service control policy (SCP).

      5. A CloudHSM encryption policy.

    35. You have just joined a new startup as a security engineer. One of your first tasks is to
      implement authentication for a new mobile application that is likely to scale to over a million
      users within the first few months. Which option is the best for handling scaling with minimal
      management?

      1. Implement Amazon Cognito with Enterprise Federation.

      2. Implement Amazon Cognito with SAML Federation.

      3. Implement Amazon Cognito with Social Federation.

      4. Implement Amazon Cognito with Mobile Federation.


    36. Your engineering team has come to you to explain that they have lost the private key associated
      with one of their Linux EC2 instances, which has an instance store-backed root volume, and
      they can no longer connect to and access the instance. Which statement is true in this circumstance?

      1. It is still possible to recover access as it has an instance store-backed root volume

      2. When you lose your private key to an EC2 instance that has an instance store-backed
        root volume, there is no way to reestablish connectivity to the instance

      3. Recreate a new key pair for the instance using the aws ec2 create-key-pair --key-name
        MyNewKeyPair AWS CLI command

      4. Request a replacement private key from AWS using the associated public key

    37. You are explaining the differences between security groups and Network Access Control Lists
      to a customer. What key points are important when understanding how these two security
      controls differ from each other? (Choose three.)

      1. Security groups are stateful by design and NACLs are not.

      2. NACLs are stateful by design and security groups are not.

      3. Security groups allow you to add a Deny action within the ruleset.

      4. NACLs allow you to add a Deny action within the ruleset.

      5. Security groups control access at the instance level.

      6. NACLs control access at the instance level.

    38. Your new startup is deploying a highly-scalable multi-tiered application. Your VPC is using
      both public and private subnets, along with an application load balancer. Your CTO has
      defined the following requirements:

      All the EC2 instances must only have a private IP address.
      All EC2 instances must have internet access.


      What configuration is required to meet these requirements? (Choose two.)


      1. A NAT gateway should be deployed in the private subnet.

      2. A NAT gateway should be deployed in the public subnet.

      3. Add a rule to your main route table, directing all outbound traffic via the ALB.

      4. Launch the EC2 instances in the private subnet.

      5. Register EC2 instances with the NAT gateway.


    39. You are experiencing an increase in the level of attacks across multiple different AWS accounts
      against your applications from the internet. This includes XSS and SQL injection attacks. As
      the security architect for your organization, you are responsible for implementing a solution to
      help reduce and minimize these threats. Which AWS services should you implement to help
      protect against these attacks? (Choose two.)

      1. AWS Shield

      2. AWS Firewall Manager

      3. AWS Web Application Firewall

      4. AWS Secrets Manager

      5. AWS Systems Manager

    40. During the deployment of a new application, you are implementing a public-facing Elastic
      Load Balancer (ELB). Due to the exposed risk, you need to implement encryption across your
      ELB, so you select HTTPS as the protocol listener. During this configuration, you will need to
      select a certificate from a certificate authority (CA). Which CA is the recommended choice
      for creating the X.509 certificate?

      1. AWS Certificate Manager within AWS Systems Manager

      2. AWS Certificate Manager

      3. Select a certificate from IAM

      4. AWS Certificate Authority Manager

      5. Certificate Authority Manager within AWS Shield

    41. Recently, you have noticed an increase in the number of DDoS attacks against your public web
      servers. You decide to implement AWS Shield Advanced to help protect your EC2 instances.

      Which configurational change do you need to implement before you can protect your instance
      using the advanced features?

      1. You must assign the EC2 instances within their own Public Shield subnet.

      2. Assign an EIP to the EC2 instance.

      3. Install the CloudFront Logging Agent on the EC2 instances.

      4. Install the SSM Agent on your EC2 instance.


    42. Which layers of the OSI model do both Amazon CloudFront (with AWS WAF) and Route 53
      offer attack mitigation against? (Choose three.)

      1. 2

      2. 3

      3. 4

      4. 5

      5. 6

      6. 7

    43. Looking at the following route table, which target would be selected for a packet being sent to
      a host with the IP address of 172.16.1.34?



      Destination        Target

      10.0.0.0/16        Local
      172.16.0.0/16      pcx-1234abcd
      172.16.1.0/24      vgw-wxyz6789


      The first route is the local route of the VPC that's found in every route table.
      The second route points to a target related to a VPC peering connection.

      The third route points to a VPN Gateway that then connects to a remote location.


      Your options are as follows:


      1. 10.0.0.0/16

      2. 172.16.0.0/16

      3. 172.16.1.0/24

      4. There is no feasible route
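Route selection in a VPC route table uses longest-prefix matching, which Python's standard ipaddress module can demonstrate. A sketch using the table from this question:

```python
import ipaddress

# Route table from the question: destination CIDR -> target
ROUTES = {
    "10.0.0.0/16": "Local",
    "172.16.0.0/16": "pcx-1234abcd",
    "172.16.1.0/24": "vgw-wxyz6789",
}

def select_route(ip: str) -> str:
    """Return the target of the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(ip)
    matches = []
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if addr in net:
            matches.append((net.prefixlen, target))
    if not matches:
        return "No feasible route"
    # The route with the largest prefix length (most specific) wins
    return max(matches)[1]

print(select_route("172.16.1.34"))  # the /24 route is more specific than the /16
```

For 172.16.1.34, both 172.16.0.0/16 and 172.16.1.0/24 match, and the /24 wins because it is the more specific prefix.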


    44. You have just joined a new network team. You are responsible for making configurational
      changes to your Direct Connect infrastructure that connects from your corporate data center to

      your AWS infrastructure. Take a look at the following policy detailing your access. Which
      statement is correct?


      [image: IAM policy document]


      1. You have full access to make configurational changes as required to Direct Connect.

      2. You have read-only access to Direct Connect.

      3. You have full access to configure components related to Direct Connect Describe.

      4. You have read-only access to Direct Connect, but you do have full access to VPN
        Gateways and Transit Gateway configurations.

    45. An engineer has raised a concern regarding one of your buckets and wants to understand
      when a particular bucket has been accessed, how frequently, and by whom. Which method
      would be the MOST appropriate to get the data required?

      1. Analyze AWS CloudTrail log data.

      2. Analyze AWS Config log data.

      3. Analyze S3 Server access logs.

      4. Analyze VPC flow logs.

    46. Amazon S3 object-level logging integrates with which other AWS service?

      1. Amazon CloudWatch

      2. Amazon Glacier

      3. Amazon EC2

      4. AWS Config

      5. AWS CloudTrail


    47. You are currently monitoring the traffic flow between a number of different subnets using VPC
      flow logs. Currently, the configuration of the capture is capturing ALL packets. However, to
      refine the flow log details, you want to modify the configuration of the flow log so that it only
      captures rejected packets instead. Which of the following statements is true?

      1. You can't capture rejected packets in a VPC flow log.

      2. You can't change the configuration of an existing flow log once it's been created.

      3. The VPC flow log can be modified with these changes without any packets being dropped.

      4. The VPC flow log must be stopped before you can make configuration changes.

    48. Your CTO is concerned about the sensitivity of the data being captured by AWS CloudTrail. As
      a result, you suggest encrypting the log files when they are sent to S3. Which encryption
      mechanism is available to you during the configuration of your Trail?

      1. SSE-S3

      2. SSE-KMS

      3. SSE-C

      4. CSE-KMS

      5. CSE-C

    49. As part of your security procedures, you need to ensure that, when using the Elastic File
      System (EFS), you enable encryption-in-transit using TLS as a mount option, which uses a
      client tunnel process. Assuming your file system ID is fs-12345678 and your mount point is
      /mnt/efs, which command would you enter to mount the EFS file system with encryption enabled?

      1. sudo mount -t efs tls fs-12345678: -o / /mnt/efs

      2. sudo mount -t tls efs fs-12345678:/ /mnt/efs

      3. sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

      4. sudo mount -t ssl tls fs-12345678:/ /mnt/efs


    50. You are configuring your AWS environment in preparation for downloading and installing the
      CloudWatch agent to offer additional monitoring. Which two tasks should you complete prior to
      installing the agent?

      1. Ensure that your EC2 instance is running the latest version of the SSM agent.

      2. Ensure that your EC2 instances have outbound internet access.

      3. Ensure that your EC2 instances all have the same tags.

      4. Ensure that any public EC2 instances are configured with an ENI.

      5. Ensure CloudWatch is configured for CloudWatch logging in your region.

    51. You have been approached by your compliance team to define what data is encrypted on an
      EBS volume when EBS encryption has been enabled. Which of the following should you
      choose? (Choose three.)

      1. The root and data volume

      2. Just the data volume

      3. All data moving between the EBS volume and the associated EC2 instance

      4. All snapshots of the EBS volume

      5. Just the root volume

      6. The ephemeral volume associated with the EC2 instances

    52. You are being audited by an external auditor against PCI DSS, who is assessing your solutions
      that utilize AWS. You have been asked to provide evidence that certain controls are being met
      against infrastructure that is maintained by AWS. What is the best way to provide this
      evidence?

      1. Contact your AWS account management team, asking them to speak with the auditor.

      2. As a customer, you have no control over the AWS infrastructure or if it meets certain
        compliance programs.

      3. Use AWS Auditing to download the appropriate compliance reports.

      4. Use AWS Artifact to download the appropriate compliance records.

    53. Which AWS CloudHSM user can carry out the following functions?

      Perform encryption and decryption.
      Create, delete, wrap, unwrap, and modify attributes of keys.
      Sign and verify.
      Generate digests and HMACs.

      Your options are as follows:


      1. Crypto Officer (CO)

      2. Crypto User (CU)

      3. Precrypto Officer (PRECO)

      4. Appliance User (AU)


    54. You have a VPC without any EC2 instances, and for security reasons, this VPC must never have
      any EC2 instances running. If an EC2 instance is created, it would create a security breach.
      What could you implement to automatically detect if an EC2 instance is launched and then
      notify you of that resource?

      1. Use AWS CloudTrail to capture the launch of an EC2 instance, with Amazon SNS
        configured as a target for notification.

      2. Use CloudWatch Events to detect the launch of an EC2 instance, with Amazon SNS
        configured as a target for notification.

      3. Use AWS GuardDuty to detect the launch of an EC2 instance, with an AWS Lambda
        function configured as a target for notification.

      4. Use AWS Systems Manager to detect the launch of an EC2 instance, with Amazon SNS
        configured as a target for notification.

    55. Which AWS CloudHSM user contains a default username and password when you first
      configure your CloudHSM?

      1. Crypto Officer

      2. Crypto User

      3. Precrypto Officer

      4. Appliance User

    56. Amazon GuardDuty uses different logs to process and analyze millions of events that are then
      referenced against numerous threat detection feeds, many of which contain known sources of
      malicious activity, including specific URLs and IP addresses. Which of the following logs are
      NOT used by Amazon GuardDuty? (Choose two.)

      1. VPC flow logs

      2. S3 Server Access logs

      3. DNS logs

      4. CloudTrail logs

      5. CloudWatch Event logs


    57. Which statement is true about a KMS key policy?

      1. It is an identity-based policy.

      2. It is a resource-based policy.

      3. You can only apply the resource using an IAM role.

      4. The same policy can be attached to multiple KMS keys in the same region.

    58. You have just joined the security team of a company that utilizes third-party tools such as
      Sumo Logic and Splunk, in addition to a number of AWS security services, including AWS
      IAM and Firewall Manager. Your manager has asked you to review solutions in order to
      centralize findings from all toolsets and services. Which of the following solutions would
      you recommend?

      1. AWS Detector

      2. Amazon Macie

      3. Amazon GuardDuty

      4. Amazon Inspector

      5. AWS Security Hub

    59. You have been asked to upload the company's own key material instead of using the key
      material generated by KMS. In preparation for doing this, you download the public key and
      import token. What format must your key material be in prior to it being uploaded?

      1. JSON

      2. Binary

      3. TAR

      4. TIFF

    60. When configuring your access policies within IAM, what should you always consider as a
      security best practice?

      1. Always add an implicit "Deny" at the end of the policy statement.

      2. Implement the principle of least privilege (PoLP).

      3. Only add a single statement within a policy.

      4. Implement identity-based policies instead of resource-based policies.

    61. Which of the following is NOT considered an asymmetric key encryption mechanism?

      1. Diffie-Hellman

      2. Advanced Encryption Standard (AES)

      3. Digital Signature Algorithm

      4. RSA


    62. AWS Trusted Advisor helps customers optimize their AWS environment through recommended
      best practices. Which of the following is NOT one of the five categories that it checks in your
      account?

      1. Cost Optimization

      2. Monitoring

      3. Performance

      4. Security

      5. Fault Tolerance

      6. Service Limits

    63. Which of the following keys shows an AWS managed key when using Amazon S3 SSE-KMS?

      1. aws/s3

      2. aws/kms/s3

      3. s3/kms

      4. kms/s3

    64. Which keys used in conjunction with KMS are used outside of the KMS platform to perform
      encryption against your data?

      1. Customer master key

      2. Data encryption key

      3. Data decryption key

      4. Customer data encryption key

    65. Your organization is storing some sensitive data on Amazon S3. Using encryption, you have
      implemented a level of protection across this data. The encryption method you used was SSE-
      S3. Which type of key does this use?

      1. AWS owned

      2. AWS managed

      3. Customer managed

      4. Customer owned

Answers



1: 1,5
2: 1,2,3
3:
4: 3
5: 1
6: 2
7: 3
8: 1
9: 3
10: 2
11: 3
12: 4
13: 2
14: 2,3
15: 2
16: 1
17: 2
18: 1
19: 3
20: 1
21: 2,4
22: 2
23: 2
24: 4
25: 2
26: 4
27: 1,4
28: 2
29: 2
30: 1
31: 1,2,4
32: 3
33: 3
34: 3,4
35: 3
36: 2
37: 1,4,5
38: 2,4
39: 2,3
40: 2
41: 2
42: 2,3,6
43: 3
44: 2
45: 3
46: 5
47: 2
48: 2
49: 3
50: 1,2
51: 1,3,4
52: 4
53: 2
54: 2
55: 3
56: 2,5
57: 2
58: 5
59: 2
60: 2
61: 2
62: 2
63: 1
64: 2
65: 1


Mock exam 2

  1. New security policies state that specific IAM users require a higher level of authentication due
    to their enhanced level of permissions. Acting as the company's security administrator, what
    could you introduce to follow these new corporate guidelines?

    1. MFA

    2. TLS

    3. SSL

    4. SNS

    5. SQS

  2. You have tried to configure your VPC with multiple subnets: a single public subnet and
    multiple private subnets. You have created an Internet Gateway (IGW) and are trying to
    update the route table associated with the subnet that you want to act as the public subnet,
    so that it points to the IGW as the target. However, you are unable to see the IGW. What is
    the most likely cause of this problem?

    1. You do not have permission to view IGWs.

    2. You have not associated the IGW with your region.

    3. You have not associated the IGW with your VPC.

    4. You have not associated the IGW with your subnet.


  3. Your operations team is using AWS WAF to protect your CloudFront distributions. As part of
    configuring the web ACLs, the team is adding multiple condition statements to a single rule.
    Which three statements are true when combining statements within one rule?

    1. The conditions are ANDed together.

    2. All conditions must be met for the rule to be effective.

    3. If one condition is met, the rule is effective.

    4. AWS WAF will not allow you to add multiple conditions to a single rule.

    5. Only one action can be applied to the rule.

  4. You currently have a multi-account AWS environment that focuses heavily on web applications.
    As part of your security measures, you are looking to implement an advanced level of DDoS
    protection across all accounts. How would you implement a solution with cost optimization in
    mind that offers DDoS protection across all accounts?

    1. Activate AWS Shield Advanced on each AWS account.

    2. Activate AWS Shield Advanced on one account and set up VPC peering for all the other
      accounts.

    3. Configure consolidated billing for all the accounts and activate AWS Shield Advanced in
      each account.

    4. Configure AWS Security Hub to manage each account and activate AWS Shield Advanced
      within AWS Security Hub.

    5. Your engineering team is trying to configure Amazon S3 server access logging. They want to
      use a source bucket named MyBucket within account A in eu-west-2, with a target bucket
      named MyTarget in account B in eu-west-2. However, they are not able to configure access
      logging. What is the most logical reason for this?

      1. The engineering team does not have cross-account access to the buckets.

      2. The source and target buckets need to be in the same account.

      3. The bucket permissions are restricting the engineering team's access.

      4. The source and target buckets need to be in different regions.

    6. How can you enhance the security of your AWS CloudTrail logs? (Choose two.)

      1. Encrypt log files using CSE-KMS.

      2. Enable log file verification.

      3. Encrypt log files using SSE-KMS.

      4. Enable log file validation.


    7. As the IAM administrator, you have been asked to create a new role to allow an existing fleet
      of EC2 instances to access Amazon S3 directly with PutObject and GetObject permissions.
      Which of the following role types would you create to do this?

      1. Another AWS account

      2. Web Identity

      3. SAML 2.0 Federation

      4. AWS Service

      5. Service Integration

    8. You have been asked to assess your fleet of EC2 instances for security weaknesses while the
      instances are in operational use. Which of the following Amazon Inspector rules packages
      would you recommend running?

      1. Center for Internet Security (CIS) benchmarks

      2. Common Vulnerabilities and Exposures (CVEs)

      3. Security best practices

      4. Runtime behavior analysis

      5. Network reachability

    9. Which of the following resources within your environment can be protected by the AWS Web
      Application Firewall service? (Choose three.)

      1. Amazon EC2

      2. Network Load Balancer

      3. Application Load Balancer

      4. API Gateway

      5. AWS NAT gateway

      6. Amazon CloudFront Distributions

    10. You have configured some AWS VPC flow logs so that they capture network traffic across your
      infrastructure. Which of the following options are available as destinations that store the
      captured VPC flow logs? (Choose two.)

      1. Amazon S3 Bucket

      2. AWS Config

      3. Amazon Macie

      4. AWS Security Hub

      5. Kinesis Stream

      6. CloudWatch logs


    11. Which AWS support plans provide the full capabilities of AWS Trusted Advisor within your
      AWS account? (Choose two.)

      1. Business

      2. Developer

      3. Basic

      4. Enterprise

      5. Corporate

    12. Which of the following policies governs the maximum permissions that an identity-based
      policy can associate with any user or role, but does not apply permissions to users or roles
      themselves?

      1. Resource-based policies

      2. Organization Service Control Policies

      3. ACLs

      4. Permission boundaries

    13. One of the subnets within your VPC is configured with the following NACL:

      image


      An instance in the subnet is configured with the following security group:


      image


      Which of the following connections would be allowed?


      1. A host with an IP address of 86.171.161.10 trying to SSH to your EC2 instance

      2. An engineer using the source IP address of 86.171.161.10 trying to RDP to the EC2
        instance

      3. If anyone, anywhere, was trying to use HTTP to get to the EC2 instance

        Your options are as follows:

        1. 1 and 2

        2. 1, 2, and 3

        3. 3

        4. 2 and 3

        5. 1 and 3


    14. Which of the following services would fall under the abstract part of the Shared Responsibility
      Model? (Choose two.)

      1. Amazon Simple Queue Service (SQS)

      2. Amazon Elastic Compute Cloud (EC2)

      3. Amazon Simple Storage Service (S3)

      4. Amazon DynamoDB

      5. Amazon Relational Database Service

    15. The following AWS Organizations SCP is in place for your account:


      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "SCPPolicy",
            "Effect": "Deny",
            "Action": [
              "iam:AttachRolePolicy",
              "iam:DeleteRole",
              "iam:DeleteRolePermissionsBoundary",
              "iam:DeleteRolePolicy",
              "iam:DetachRolePolicy",
              "iam:PutRolePermissionsBoundary",
              "iam:PutRolePolicy",
              "iam:UpdateAssumeRolePolicy",
              "iam:UpdateRole",
              "iam:UpdateRoleDescription"
            ],
            "Resource": [
              "arn:aws:iam::*:role/IAM-Packt"
            ]
          }
        ]
      }


      Which statements are true? (Choose two.)


      1. All access is denied to delete all IAM roles.

      2. All access is denied to update the IAM-Packt role.

      3. All access is denied to assume the IAM-Packt role.

      4. All access is denied to DetachRolePolicy for all roles.

      5. All access is denied to DeleteRolePermissionsBoundary for the IAM-Packt role.


    16. You currently have a number of resources based within your corporate data center, and you
      also utilize some AWS resources within a VPC. Over the coming months, you are looking to
      incorporate more of your on-premises solutions with the cloud. From a security perspective,
      your CTO wants to implement a more reliable and secure method of connecting to your VPC.
      Which connectivity methods would you recommend in order to maintain a higher level of
      security? (Choose two.)

      1. Virtual Private Gateway

      2. Virtual Private Network

      3. Direct Connect

      4. Connect Direct

      5. Customer Private Gateway

    17. Your company is looking to implement a link to AWS using AWS Direct Connect. As the
      solutions architect, you explain that there are a number of prerequisites that need to be met
      by your own internal network. Which of the following is NOT a prerequisite for Direct
      Connect?

      1. For authentication, your router must support both BGP and BGP MD5 authentication.

      2. Your network infrastructure MUST use single-mode fiber.

      3. The port on your device must have automatically configured speed and half-duplex mode
        enabled.

      4. You must ensure that you have 802.1Q VLAN encapsulation support across your network
        infrastructure.

    18. You have configured AWS Config rules to implement another level of compliance check. Your
      s3-bucket-server-side-encryption-enabled check has found five non-compliant resources. What action
      is taken by AWS Config?

      1. The default Amazon S3 encryption method is automatically applied to the non-compliant
        bucket.

      2. No further objects will be allowed to be saved in this bucket until the non-compliance
        associated with the bucket has been made compliant.

      3. No action will be taken; the non-compliance is for informational purposes.

      4. Objects in the non-compliant bucket will be moved to a different storage class.

    19. You have been asked to present an AWS security introduction course to some of the business
      managers in your organization. As part of this process, you are going to explain the AWS
      Shared Responsibility Model. Currently, your organization works heavily with Amazon EMR,
      Amazon Relational Database Service (RDS), and AWS Elastic Beanstalk, so you will be
      focusing on the model that best represents these services. Which of the following models do
      these services fit into best?

      1. Infrastructure

      2. Container

      3. Abstract

      4. Platform

    20. Which statements are true regarding Amazon EC2 Key Pairs? (Choose three.)

      1. Key pairs use symmetric cryptography.

      2. Key pairs use public-key cryptography.

      3. The public key is maintained by the customer and must be downloaded.

      4. The public key encrypts the credentials.

      5. The private key decrypts credentials.

    21. Which component of AWS Systems Manager can help you gain an overview of how the
      resources within your resource groups are operating, and integrates with the following?

      AWS Config

      CloudTrail

      Personal Health Dashboard

      Trusted Advisor


      Your options are as follows:


      1. Resource Groups

      2. Run Command

      3. Built-in Insights

      4. State Manager

      5. Session Manager


    22. When implementing a VPN connection between your corporate network and your AWS VPC,
      which components are essential to establishing a secure connection? (Choose two.)

      1. A VPN Gateway attached to your AWS architecture

      2. A Customer Gateway attached to your AWS architecture

      3. A Private Gateway attached to your AWS architecture

      4. A VPN Gateway attached to your corporate network

      5. A Customer Gateway attached to your corporate network

      6. A Private Gateway attached to your corporate network

    23. AWS Trusted Advisor provides a "Service Limit" category. This category checks whether any
      of your services have reached a certain percentage or more against the allotted service limit.
      What is the percentage set at before an alert is triggered?

      1. 70%

      2. 75%

      3. 80%

      4. 85%

    24. You are looking to implement AWS Firewall Manager within your organization as a way to
      manage your WebACL across multiple AWS accounts. As a prerequisite to using this service,

      you have enabled AWS Config. What two other prerequisites must be met before you can use
      AWS Firewall Manager?

      1. Enable CloudTrail logs.

      2. Add your AWS account to an AWS organization that has ALL features enabled.

      3. Add your AWS account to an AWS organization that has consolidated billing enabled
        ONLY.

      4. Select your primary account to act as the Firewall Manager Administrative account.

      5. Enable AWS Shield across all AWS accounts.


    25. What is the recommended running time for an AWS Amazon Inspector assessment?

      1. 1 hour

      2. 6 hours

      3. 12 hours

      4. 24 hours

    26. You have just updated your KMS Key policy for one of your customer-managed CMKs. Within
      the Sid Allow access for Key Administrators section, you added the principal ARN of two of
      your engineers to maintain the same access as other key administrators. However, they
      complain, explaining that they are unable to use the CMK to perform cryptographic operations.
      What is the cause of this?

      1. The CMK is configured with a kms:Encrypt deny.

      2. Key administrators are not able to use the CMK for cryptographic operations.

      3. The role associated with the engineers prevents the users from using KMS.

      4. You need to update the encryption policy for the CMK in the same region to provide
        access.

    27. Which of the following are NOT actions that can be set within an AWS Web Application
      Firewall rule? (Choose two.)

      1. Reject

      2. Allow

      3. Deny

      4. Block

      5. Count

    28. When using social federated access, any IdP that is OpenID Connect (OIDC) compatible can
      be used for authentication. Which of the following is not used for social federation?

      1. ADFS

      2. Facebook

      3. Amazon

      4. Google

    29. Which of the following security policies are NOT written in JSON format?

      1. AWS IAM identity-based policies

      2. AWS KMS key policies

      3. AWS Organizational Service Control Policies

      4. AWS Amazon S3 ACLs


    30. You have configured Amazon Inspector to run all the rules packages against your fleet of EC2
      instances, which are running on both Linux-based and Windows operating systems. After
      examining the findings, you notice that there are no findings for Windows-based operating
      systems for the "Security Best Practices" rules package. What could be the explanation for this?

      1. The Security Best Practices rules package only discovers Linux-based operating systems.

      2. There were no issues found with the Windows-based EC2 instances.

      3. The Amazon Inspector agent on the Windows-based OS was not configured to detect this
        rules package.

      4. The role associated with Amazon Inspector did not permit this level of access.

    31. You have configured a bastion host within the public subnet of your VPC. To connect to your
      Linux instances in your private subnet, you need to use the private key that is not currently
      stored on the bastion host. What method of connectivity can you use to gain access to the Linux
      instance?

      1. Copy the *.pem file from your localhost to your bastion host and then connect to your Linux
        instance.

      2. Use SSH forwarding.

      3. Connect to your bastion using SSL to encrypt the *.pem file, then connect to your Linux
        instance using the encrypted
        *.pem file.

      4. Use AWS Secrets Manager to maintain the *.pem files and call it using an API via the
        bastion host while it's connecting to your Linux instance.

    32. You need to retrieve a secret stored in AWS Secrets Manager to gain access to an RDS
      database. You do not have access to the AWS Management Console, so you need to retrieve it
      programmatically. Which command should you use for this when using the AWS CLI?

      1. get-secret-value-rds

      2. get-rds-secret-value

      3. get-rds-value

      4. get-secret-value


    33. Which of the following services and features of AWS do NOT offer DDoS protection or
      mitigation? (Choose one.)

      1. AWS CloudTrail

      2. Application Load Balancer

      3. Amazon CloudFront

      4. Amazon Route 53

      5. AWS WAF

    34. To provide a single-pane-of-glass approach to the security notifications across your accounts,
      your organization has decided to implement AWS Security Hub. The first step of activating this
      service requires you to select a security standard. Which standards are available for you to
      select? (Choose two.)

      1. CIS AWS Foundations Benchmark

      2. PCI DSS

      3. ISO

      4. FedRamp

      5. SOC 2

    35. To simplify authentication to specific AWS resources, you have decided to implement Web
      Identity Federation. Prior to configuration, what information do you need to obtain from the
      IdP first?

      1. Federated Sequence ID

      2. Federation Number

      3. Application ID/Audience

      4. Application Notice

    36. Which AWS VPC secure networking component is being described here?

      “A hardened EC2 instance with restrictive controls that acts as an ingress gateway between the internet and your private
      subnets without directly exchanging packets between the two environments.”


      1. Bastion Host

      2. NAT gateway

      3. NAT Instance

      4. Internet Gateway


    37. When trying to protect web applications, there are many different attacks that can be
      experienced, as explained within the OWASP top 10. Which type of attack is being described
      here?


      “These are malicious scripts that are embedded in seemingly trusted web pages that the browser then executes. This can then
      allow a malicious attacker to gain access to any sensitive client-side data, such as cookie information.”


      1. SQL injection attack

      2. String and regex matching

      3. Cross-Site Scripting (XSS)

      4. Broken access control

    38. One of the key components of Amazon Macie is how it classifies data to help determine its
      level of sensitivity and criticality to your business through a series of automatic content
      classification mechanisms. It performs its classification using the object-level API data events
      it collates from CloudTrail logs. Currently, there are five levels of classification, but one of
      them is hidden from the console. Which one?

      1. Content type

      2. Support vector machine-based

      3. Theme

      4. File extension

      5. Regex

    39. Using Amazon Macie, you need to classify your S3 data based on a list of predefined keywords
      that exist within the actual content of the object being stored. What would be the best content
      classification type to use to capture this information?

      1. Theme

      2. File Extension

      3. Regex

      4. Content type


    40. When working with cross-account access, you must configure a Trusting account and a Trusted
      account. A user, "Stuart", in account A needs to gain access to an Amazon RDS database in
      account B. To configure access, cross-account access needs to be configured. Which steps need
      to take place? (Choose two.)

      1. From the Trusting account, create a cross-account access role.

      2. From the Trusted account, create a cross-account access role.

      3. Create a policy to assume the role in the Trusted account.

      4. Create a policy to assume the role in the Trusting account.

    41. You are responsible for designing security solutions for protecting web applications using
      AWS Web Application Firewall. During a meeting with senior management, you are asked to
      highlight the core elements that construct the service. Which components would you highlight to
      the team? (Choose three.)

      1. Conditions

      2. Values

      3. Rules

      4. Web ACLs

      5. Thresholds

    42. Which of the following traffic types are NOT captured by VPC flow logs?

      1. Ingress traffic to private subnets

      2. Egress traffic from public subnets

      3. Traffic to the reserved IP address for the default VPC router

      4. Traffic to the private IPv4 address of a NAT gateway

    43. Which is NOT a method of installing the Amazon Inspector agent?

      1. A manual install via a script being run on the instance

      2. Using the Run command from within System Manager

      3. Installing the agent as a part of the initial assessment when defining your target

      4. Using an Amazon AMI that already has the agent installed

      5. Using the Deploy command from AWS Security Hub

    44. Amazon GuardDuty has the ability to perform remediation of findings through automation.
      Which AWS service or feature does GuardDuty integrate with to allow this?

      1. AWS Security Hub

      2. AWS CloudWatch Events

      3. AWS CloudTrail

      4. AWS KMS
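
      For context, the integration in question 44 works by having a CloudWatch Events (now
      EventBridge) rule match GuardDuty findings and route them to an automation target such as
      a Lambda function. As a minimal sketch (the rule name and target are yours to choose), the
      event pattern that matches all GuardDuty findings looks like this:

      ```json
      {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"]
      }
      ```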


    45. Your organization requires the use of MFA, but virtual MFA devices are not allowed. What
      other device options could you use? (Choose two.)

      1. U2F Security Keys

      2. Gemalto Token

      3. CMK keys

      4. SCP Token

      5. GuardDuty Security Keys

    46. When AWS evaluates the permissions of an IAM user, a level of policy evaluation logic is
      applied to determine their resulting permission level. Which order are policies evaluated in?

      1. Resource-based, Identity-based, IAM Permission boundaries, and SCPs

      2. IAM Permission boundaries, Identity-based, Resource-based, and SCPs

      3. Identity-based, Resource-based, IAM Permission boundaries, and SCPs

      4. SCPs, Identity-based, Resource-based, and IAM Permission boundaries

    47. Your systems engineers explain that they have deleted a key pair from the EC2 management
      console. However, they can still connect to EC2 instances that had this key pair associated with
      the instance. They are confused as to how this connectivity is still possible, even though the key
      pair was deleted. What explanation do you give them?

      1. When you delete a key pair from the EC2 Management Console, it will automatically
        reinstate it if AWS detects it is currently associated with existing EC2 instances to
        maintain connectivity.

      2. When you delete a key pair from the EC2 Management Console, it just deletes the copy of
        the public key that AWS holds; it does not delete the public keys that are attached to
        existing EC2 instances.

      3. When you delete a key pair from the EC2 Management Console, it removes the associated
        public key from the EC2 instance. It also allows open access until you create another key
        pair to associate with the instance.

      4. When you attempt to delete an active key pair from the EC2 Management Console, it is
        marked with a "hidden" tag, but NOT deleted. Only inactive key pairs are removed from
        the console.

    48. As the lead security engineer, you have been asked to review how credentials associated with
      your RDS databases are managed and to ensure there are no details hardcoded within your
      processes and applications. You need to implement a solution that offers greater protection and
      also enables the automatic rotation of credentials. Which services would you use within your
      solution?

      1. AWS Security Hub with AWS KMS integration

      2. AWS Config with AWS Lambda and AWS KMS integration

      3. AWS Trusted Advisor with AWS KMS integration

      4. AWS Security Systems Manager with AWS Lambda integration

      5. AWS Secrets Manager with AWS KMS and AWS Lambda integration

    49. S3 object-level logging integrates with which other AWS service component to record both
      read and write API activity?

      1. AWS CloudWatch Events

      2. AWS CloudTrail Data events

      3. AWS Config Rules

      4. AWS Trusted Advisor

    50. As the AWS security lead, you are concerned that your IAM users have overly permissive
      permissions. Which element of IAM would you check to determine if permissions were not
      being used to allow you to implement the principle of least privilege?

      1. Permissions

      2. Policy Usage

      3. Policy Versions

      4. Access Advisor

    51. You have been asked by your CTO to provide a list of all the EC2 instances within your
      production network that have missing patches. Which approach would be best to obtain this
      list?

      1. Use AWS Config to find a list of non-compliant patches across your EC2 fleet.

      2. Search AWS CloudTrail Patch logs to determine which patches are missing.

      3. Use Patch Manager within AWS Systems Manager.

      4. Query the patch versions using Amazon CloudWatch metrics.


    52. The security perspective of the AWS Cloud Adoption Framework covers four primary control
      areas: directive controls, preventive controls, detective controls, and which other?

      1. Responsive controls

      2. Reactive controls

      3. Security controls

      4. Access controls

    53. To maintain a high level of security, a _____ VPN connection consists of two tunnels,
      allowing a cryptographic method of communication between two endpoints. Select the
      missing word:

      1. SSL

      2. TLS

      3. IPsec

      4. AES256

    54. A team of developers is currently assuming a role that has AmazonS3FullAccess permissions,
      in addition to varying levels of permissions to Amazon CloudWatch, Amazon SQS, AWS
      Lambda, and Amazon SNS. However, temporarily, you need to limit the developers in your
      AWS account to only read-only access to Amazon S3 while maintaining all other permissions.
      Which method would be best for this that also has the least administrative effort?

      1. Create a new role with the same access to Amazon CloudWatch, Amazon SQS, AWS
        Lambda, and Amazon SNS, in addition to
        AmazonS3ReadOnlyAccess.

      2. Set an in-line policy against the role with AmazonS3ReadOnlyAccess.

      3. Set a permission boundary against the role with AmazonS3ReadOnlyAccess.

      4. Set an AWS Organizations policy to AmazonS3ReadOnlyAccess and associate it with the AWS
        account containing the developers.

    55. When working with the security components of VPCs, there are some key elements: Network
      Access Control Lists and security groups. Understanding the difference between them is key.
      Which of the following statements are true? (Choose three.)

      1. NACLs are stateless

      2. Security groups are stateless.

      3. There are no Deny rules for security groups.

      4. There are no Deny rules for NACLs.

      5. There is a Rule# field for NACLs.

      6. There is a Rule# field for security groups.

    56. In a three-way handshake where a client-server is establishing a connection, which is the
      correct order for the operations to be carried out in?

      1. Syn, Syn-Ack, Ack

      2. Syn-Ack, Syn, Ack

      3. Ack, Syn, Syn, Ack

      4. Syn, Ack, Syn, Ack
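
      The handshake in question 56 is performed by the operating system whenever a TCP
      connection is opened. The sketch below, over the loopback interface, shows that connect()
      and accept() only return once the Syn, Syn-Ack, Ack exchange has completed:

      ```python
      import socket

      # Server: listen on an ephemeral loopback port.
      server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      server.bind(("127.0.0.1", 0))
      server.listen(1)
      port = server.getsockname()[1]

      # Client: connect() sends Syn, waits for Syn-Ack, replies with Ack --
      # the kernel performs the full three-way handshake inside this one call.
      client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      client.connect(("127.0.0.1", port))

      # accept() returns only once the handshake has completed.
      conn, addr = server.accept()
      established = conn.getpeername() == client.getsockname()
      print(established)

      client.close()
      conn.close()
      server.close()
      ```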

    57. Working at a mobile gaming company, you have just launched a new game with the hope that it
      will go viral. Using Amazon Cognito, you assigned permissions to users so that they can access
      the AWS resources that are used within the mobile app by using temporary credentials. This
      access can be granted to both federated users and anonymous guest users. Which component of
      Amazon Cognito enables you to assign permissions?

      1. User Pools

      2. Resource Pools

      3. Identity Pools

      4. IAM Pools


    58. What action is being carried out against AWS Secrets Manager using this AWS CLI command?


      aws secretsmanager put-resource-policy --secret-id My_RDS_Secret --resource-policy file://resource.json


      1. An identity-based policy is being applied to a group named My_RDS_Secret.

      2. A resource-based policy is being applied to a secret named My_RDS_Secret.

      3. A resource-based policy named My_RDS_Secret is being applied to a secret named
        resource.json.

      4. An identity-based policy is being applied to a secret named My_RDS_Secret using the
        resource.json resource policy file.
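      For context, the command in this question attaches a resource-based policy stored in
      resource.json to the secret. A minimal sketch of what such a file might contain follows;
      the account ID, role name, and allowed action are illustrative assumptions, not details
      given in the question:

      ```python
      import json

      # Hypothetical resource-based policy for a Secrets Manager secret.
      # The principal ARN and action below are illustrative only.
      resource_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {"AWS": "arn:aws:iam::111122223333:role/RDSAppRole"},
                  "Action": "secretsmanager:GetSecretValue",
                  "Resource": "*",
              }
          ],
      }

      # Write the file that the CLI command then attaches with:
      #   aws secretsmanager put-resource-policy --secret-id My_RDS_Secret \
      #       --resource-policy file://resource.json
      with open("resource.json", "w") as f:
          json.dump(resource_policy, f, indent=2)
      ```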

    59. From a threat detection and management perspective, which AWS service would you use to
      provide a single-pane-of-glass view across your infrastructure, bringing all of your security
      data into a single place, presented in a series of tables and graphs?

      1. Amazon GuardDuty

      2. Amazon Detective

      3. Amazon Macie

      4. AWS Security Hub

    60. You have just completed a large deployment of patches to your EC2 fleet to minimize
      security vulnerabilities. Your manager has asked you for compliance data to confirm that
      your environment meets the patching criteria set out by the business. Which methods can
      be used to view compliance data? (Choose three.)

      1. AWS Systems Manager Artifact

      2. AWS Systems Manager Explorer

      3. AWS Systems Manager Configuration Compliance

      4. AWS Systems Manager Managed Instances


    61. You have been asked to implement an additional level of security within some of your IAM
      identity-based policies to restrict access based on the request's source IP address range of
      10.0.0.0/16. What optional parameter could you add to the policies to enforce this restriction?


      1. "Criteria": {
           "IpAddress": {
             "aws:SourceIp": "10.0.0.0/16"
           }
         }

      2. "Condition": {
           "IpAddress": {
             "aws:SourceIp": "10.0.0.0/16"
           }
         }

      3. "State": {
           "IpAddress": {
             "aws:SourceIp": "10.0.0.0/16"
           }
         }

      4. "Context": {
           "IpAddress": {
             "aws:SourceIp": "10.0.0.0/16"
           }
         }
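      To show where this parameter sits, here is a sketch of a complete identity-based policy
      containing the Condition element; the action and bucket ARN are illustrative assumptions,
      not part of the question:

      ```python
      import json

      # Sketch of an identity-based policy restricting access by source IP.
      # The action and resource ARN are illustrative; the Condition block
      # is the element the question asks about.
      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:ListBucket",
                  "Resource": "arn:aws:s3:::example-bucket",
                  "Condition": {
                      "IpAddress": {"aws:SourceIp": "10.0.0.0/16"}
                  },
              }
          ],
      }

      print(json.dumps(policy, indent=2))
      ```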


    62. To enhance the security of your APIs that are being used with the AWS API Gateway service,
      which method can't be used to control authentication and authorization?

      1. Resource-based policies

      2. VPC Endpoint Policies

      3. Lambda Authorizers

      4. AWS Config Rules

    63. Which AWS service can be used during SAML federation to your AWS Management
      Console to obtain temporary credentials and to create a console sign-in URL using the
      credentials it generates?

      1. AWS SQS

      2. AWS STS

      3. AWS SWS

      4. AWS SNS


    64. To help you maintain a consistent and measurable condition of your EC2 instances, such as
      network settings, the installation of agents, and joining a Windows domain, you look to use
      AWS Systems Manager to help you manage operations. Which element of the service would
      you use to maintain these settings?

      1. State Manager

      2. Session Manager

      3. Resource Groups

      4. Patch Manager

    65. You must allow your private instances to access the internet. The solution must be highly
      available, involve minimal maintenance, and have high bandwidth capabilities. A secure way
      to implement this access is with a NAT. Which NAT option would you implement to meet
      these requirements?

      1. NAT Threshold

      2. NAT Instance

      3. NAT gateway

      4. NAT Transit


Answers



1: 1

2: 3

3: 1,2,5

4: 3

5: 2

6: 3,4

7: 4

8: 4

9: 3,4,6

10: 1,6

11: 1,4

12: 4

13: 4

14: 1,3,4

15: 2,5

16: 2,4

17: 3

18: 3

19: 2

20: 2,4,5

21: 3

22: 1,5

23: 4

24: 2,4

25: 4

26: 2

27: 1,3

28: 1

29: 4

30: 1

31: 2

32: 4

33: 1

34: 1,2

35: 3

36: 1

37: 3

38: 2

39: 1

40: 1,3

41: 1,3,4

42: 3

43: 4

44: 2

45: 1,2

46: 3

47: 2

48: 5

49: 2

50: 4

51: 3

52: 1

53: 3

54: 3

55: 1,3,5

56: 1

57: 3

58: 2

59: 4

60: 2,3,4

61: 2

62: 4

63: 2

64: 1

65: 3




Assessment Test

  1. Which one of the following components should not influence an organization’s security
    policy?

    1. Business objectives

    2. Regulatory requirements

    3. Risk

    4. Cost–benefit analysis

    5. Current firewall limitations

  2. Consider the following statements about the AAA architecture:

    1. Authentication deals with the question “Who is the user?”

    2. Authorization addresses the question “What is the user allowed to do?”

    3. Accountability answers the question “What did the user do?”

    Which of the following is correct?

    1. Only I is correct.

    2. Only II is correct.

    3. I, II, and III are correct.

    4. I and II are correct.

    5. II and III are correct.

  3. What is the difference between denial-of-service (DoS) and distributed denial-of-service
    (DDoS) attacks?

    1. DDoS attacks have many targets, whereas DoS attacks have only one each.

    2. DDoS attacks target multiple networks, whereas DoS attacks target a single network.

    3. DDoS attacks have many sources, whereas DoS attacks have only one each.

    4. DDoS attacks target multiple layers of the OSI model and DoS attacks only one.

    5. DDoS attacks are synonymous with DoS attacks.

  4. Which of the following options is incorrect?

    1. A firewall is a security system aimed at isolating specific areas of the network and
      delimiting domains of trust.

    2. Generally speaking, the web application firewall (WAF) is a specialized security element
      that acts as a full-reverse proxy, protecting applications that are accessed through HTTP.

    3. Whereas intrusion prevention system (IPS) devices handle only copies of the packets and
      are mainly concerned with monitoring and alerting tasks, intrusion detection system
      (IDS) solutions are deployed inline in the traffic flow and have the inherent design goal
      of avoiding actual damage to systems.



    4. Security information and event management (SIEM) solutions are designed to collect
      security-related logs as well as flow information generated by systems (at the host or the
      application level), networking devices, and dedicated defense elements such as firewalls,
      IPSs, IDSs, and antivirus software.

  5. In the standard shared responsibility model, AWS is responsible for which of the following
    options?

    1. Regions, availability zones, and data encryption

    2. Hardware, firewall configuration, and hypervisor software

    3. Hypervisor software, regions, and availability zones

    4. Network traffic protection and identity and access management

  6. Which AWS service allows you to generate compliance reports that enable you to evaluate
    the AWS security controls and posture?

    1. AWS Trusted Advisor

    2. AWS Well-Architected Tool

    3. AWS Artifact

    4. Amazon Inspector

  7. Which of the following contains a definition that is not a pillar from the AWS Well-Architected
    Framework?

    1. Security and operational excellence

    2. Reliability and performance efficiency

    3. Cost optimization and availability

    4. Security and performance efficiency

  8. Which of the following services provides a set of APIs that control access to your resources
    on the AWS Cloud?

    1. AWS AAA

    2. AWS IAM

    3. AWS Authenticator

    4. AWS AD

  9. Regarding AWS IAM principals, which option is not correct?

    1. A principal is an IAM entity that has permission to interact with resources in the AWS
      Cloud.

    2. They can only be permanent.

    3. They can represent a human user, a resource, or an application.

    4. They have three types: root users, IAM users, and roles.

  10. Which of the following is not a recommendation for protecting your root user credentials?

    1. Use a strong password to help protect account-level access to the management console.

    2. Enable MFA on your AWS root user account.



    3. Do not create an access key for programmatic access to your root user account unless
      such a procedure is mandatory.

    4. If you must maintain an access key to your root user account, you should never rotate it
      using the AWS Console.

  11. In AWS Config, which option is not correct?

    1. The main goal of AWS Config is to record configuration and the changes of the
      resources.

    2. AWS Config Rules can decide if a change is good or bad and if it needs to execute an
      action.

    3. AWS Config cannot integrate with external resources like on-premises servers and
      applications.

    4. AWS Config can provide configuration history files, configuration snapshots, and config-
      uration streams.

  12. AWS CloudTrail is the service in charge of keeping records of API calls to the AWS Cloud.
    Which option is not a type of AWS CloudTrail event?

    1. Management

    2. Insights

    3. Data

    4. Control

  13. In Amazon VPCs, which of the following is not correct?

    1. VPC is the acronym of Virtual Private Cloud.

    2. VPCs do not extend beyond an AWS region.

    3. You can deploy only private IP addresses from RFC 1918 within VPCs.

    4. You can configure your VPC to not share hardware with other AWS accounts.

  14. In NAT gateways, which option is not correct?

    1. NAT gateways are always positioned in public subnets.

    2. Route table configuration is usually required to direct traffic to these devices.

    3. NAT gateways are highly available by default.

    4. Amazon CloudWatch automatically monitors traffic flowing through NAT gateways.

  15. In security groups, which option is not correct?

    1. Security groups only have allow (permit) rules.

    2. The default security group allows all inbound communications from resources that are
      associated to the same security group.

    3. You cannot have more than one security group associated to an instance’s ENI.

    4. The default security group allows all outbound communications to any destination.



  16. In network ACLs, which option is not correct?

    1. They can be considered an additional layer of traffic filtering to security groups.

    2. Network ACLs have allow and deny rules.

    3. The default network ACL has only one inbound rule, denying all traffic from all
      protocols, all port ranges, from any source.

    4. A subnet can be associated with only one network ACL at a time.

  17. In AWS KMS, which option is not correct?

    1. KMS can integrate with Amazon S3 and Amazon EBS.

    2. KMS can be used to generate SSH access keys for Amazon EC2 instances.

    3. KMS is considered multitenant, not a dedicated hardware security module.

    4. KMS can be used to provide data-at-rest encryption for RDS, Aurora, DynamoDB, and
      Redshift databases.

  18. Which option is not correct in regard to AWS KMS customer master keys?

    1. A CMK is a 256-bit AES for symmetric keys.

    2. A CMK has a key ID, an alias, and an ARN (Amazon Resource Name).

    3. A CMK has two policies roles: key administrators and key users.

    4. A CMK can also use IAM users, IAM groups, and IAM roles.

  19. Which of the following actions is not recommended when an Amazon EC2 instance is
    compromised by malware?

    1. Take a snapshot of the EBS volume at the time of the incident.

    2. Change its security group accordingly and reattach any IAM role attached to the
      instance.

    3. Tag the instance as compromised, together with an AWS IAM policy that explicitly
      restricts all operations related to the instance to the incident response and forensics teams.

    4. When the incident forensics team wants to analyze the instance, they should deploy it
      into a totally isolated environment—ideally a private subnet.

  20. Which of the following actions is recommended when temporary credentials from an
    Amazon EC2 instance are inadvertently made public?

    1. You should assume that the access key was compromised and revoke it immediately.

    2. You should try to locate where the key was exposed and inform AWS.

    3. You should not reevaluate the IAM roles attached to the instance.

    4. You should avoid rotating your key.

  21. Which of the following options may not be considered a security automation trigger?

    1. Unsafe configurations from AWS Config or Amazon Inspector

    2. AWS Security Hub findings

    3. Systems Manager Automation documents

    4. Event from Amazon CloudWatch Events



  22. Which of the following options may not be considered a security automation response task?

    1. An AWS Lambda function can use AWS APIs to change security groups or network
      ACLs.

    2. A Systems Manager Automation document execution run.

    3. Systems Manager Run Command can be used to execute commands to multiple hosts.

    4. Apply a thorough forensic analysis in an isolated instance.

  23. Which of the following may not be considered a troubleshooting tool for security in AWS
    Cloud environments?

    1. AWS CloudTrail

    2. Amazon CloudWatch Logs

    3. AWS Key Management Service

    4. Amazon EventBridge

  24. Right after you correctly deploy VPC peering between two VPCs (A and B), inter-VPC traffic
    is still not happening. What is the most probable cause?

    1. The peering must be configured as transitive.

    2. The route tables are not configured.

    3. You need a shared VPC.

    4. You need to configure a routing protocol.

  25. A good mental exercise for your future cloud security design can start with the analysis of
    how AWS native security services and features (as well as third-party security solutions) can
    replace your traditional security controls. Which of the options is not a valid mapping
    between traditional security controls and potential AWS security controls?

    1. Network segregation (such as firewall rules and router access control lists) and security
      groups and network ACLs, Web Application Firewall (WAF)

    2. Data encryption at rest and Amazon S3 server-side encryption, Amazon EBS encryption,
      Amazon RDS encryption, and other AWS KMS-enabled encryption features

    3. Monitoring intrusions and implementing security controls at the operating system level
      versus Amazon GuardDuty

    4. Role-based access control (RBAC) versus AWS IAM, Active Directory integration
      through IAM groups, temporary security credentials, AWS Organizations



Answers to Assessment Test

  1. E. Specific control implementations and limitations should not drive a security policy. In fact,
    the security policy should influence such decisions, and not vice versa.

  2. D. Accountability is not part of the AAA architecture; accounting is.

  3. C. When a DoS attack is performed in a coordinated fashion, with a simultaneous use of
    multiple source hosts, the term distributed denial-of-service (DDoS) is used to describe it.

  4. C. It’s the other way around.

  5. C. AWS is responsible for its regions, availability zones, and hypervisor software. In the
    standard shared responsibility model, AWS is not responsible for user-configured features
    such as data encryption, firewall configuration, network traffic protection, and identity and
    access management.

  6. C. AWS Artifact is the free service that allows you to create compliance-related reports.

  7. C. Availability is not a pillar from the AWS Well-Architected Framework.

  8. B. AWS Identity and Access Management (IAM) gives you the ability to define authentication
    and authorization methods for using the resources in your account.

  9. B. IAM principals can be permanent or temporary.

  10. D. If you must maintain an access key to your root user account, you should regularly rotate
    it using the AWS Console.

  11. C. AWS Config can also integrate with external resources like on-premises servers and
    applications, third-party monitoring applications, or version control systems.

  12. D. CloudTrail events can be classified as management, insights, and data.

  13. C. You can also assign public IP addresses in VPCs.

  14. C. You need to design your VPC architecture to include NAT gateway redundancy.

  15. C. You can add up to five security groups per network interface.

  16. C. The default network ACL also has a Rule 100, which allows all traffic from all protocols,
    all port ranges, from any source.

  17. B. Key pairs (public and private keys) are generated directly from the EC2 service.

  18. D. IAM groups cannot be used as principals in KMS policies.

  19. B. To isolate a compromised instance, you need to change its security group accordingly and
    detach (not reattach) any IAM role attached to the instance. You also remove it from Auto
    Scaling groups so that the service creates a new instance from the template and service
    interruption is reduced.



  20. A. As a best practice, if any access key is leaked to a shared repository (like GitHub)—even if
    only for a couple of seconds—you should assume that the access key was compromised and
    revoke it immediately.

  21. C. Systems Manager Automation documents are actually a security automation response
    task, not a trigger.

  22. D. A forensic analysis is a detailed investigation for detecting and documenting an incident.
    It usually requires human action and analysis.

  23. C. AWS KMS is a managed service that facilitates the creation and control of the encryption
    keys used to encrypt your data, but it doesn’t help you to troubleshoot in other services.

  24. B. VPC peering requires route table configuration to direct traffic between a pair of VPCs.

  25. C. Monitoring intrusions and security controls at the operating system level can be mapped
    to third-party solutions, including endpoint detection and response (EDR), antivirus (AV),
    host intrusion prevention system (HIPS), anomaly detection, user and entity behavior
    analytics (UEBA), and patching.

36 Chapter 1 Security Fundamentals


Review Questions

  1. Read the following statements and choose the correct option:

  2. Read the following statements and choose the correct option:

  3. What better defines “the property of ensuring that someone cannot deny an action that has
    already been performed so that you can avoid attempts of not being accountable”?

    1. Accountability

    2. Nonrepudiation

    3. Responsibility

    4. Verification

    5. Authentication

  4. Which option correctly defines the AAA architecture?

    1. Accountability, authorization, availability

    2. Authentication, authorization, anonymity

    3. Authentication, authorization, accountability

    4. Authentication, authorization, accounting

    5. Authorization, anonymity, accountability



  5. Which option represents the seven OSI model layers in the correct order?

    1. Physical, Data Link, Network, Transport, Session, Presentation, and Application

    2. Physical, Data Link, Network, Transport, Session, Application, and Presentation

    3. Physical, Data Link, Routing, Transport, Session, Presentation, and Application

    4. Bit, Frame, Packet, Connection, Session, Coding, and User Interface

    5. Physical, Media Access Control, Network, Transport, Session, Presentation, and
      Application

  6. Which of the following options is not correct?

    1. UDP is part of the TCP/IP stack.

    2. IP can be related to the Network layer of the OSI model.

    3. ICMP, OSPF, and BGP are dynamic routing protocols.

    4. TCP is a connection-oriented and reliable transport protocol.

    5. UDP is a connectionless and unreliable transport protocol.

  7. Which well-known class of cyberattacks is focused on affecting the availability of an
    application, connectivity device, or computing hosts?

    1. Man-in-the-middle

    2. Phishing

    3. Malware

    4. Reconnaissance

    5. Denial of service

  8. Which of the following options is not correct?

    1. A firewall is a security system aimed at isolating specific areas of the network and
      delimiting domains of trust.

    2. A typical WAF analyzes each HTTP command, thus ensuring that only those actions
      specified on the security policy can be performed.

    3. All VPNs were created to provide a secure extension of corporate networks, without
      the need of using a dedicated infrastructure but ensuring data confidentiality.

    4. IPsec deals with integrity, confidentiality, and authentication.

    5. SIEM stands for security information and event management.

  9. Which security framework was created with the goal of increasing the level of protection
    for issuers of credit cards?

    1. HIPAA

    2. GDPR

    3. PCI DSS

    4. NIST CSF

    5. CS STAR



  10. Which concept guides the zero-trust security model?

    1. Develop, implement, monitor continuously, test, and improve

    2. Before, during, and after an attack

    3. Principle of least privilege

    4. In transit and at rest

    5. Automation



Be familiar with the AWS Marketplace. You can find many security solutions in the AWS
Marketplace that you can use to improve your security posture. You can use strategic AWS
security partners. You can use your own licenses in a Bring Your Own License model. The
pay-as-you-go model is another option available to you.


Review Questions

  1. In an Amazon EC2 instance deployment, who is in charge of the data center facilities
    security, based on the Shared Responsibility Model?

    1. AWS

    2. The customer

    3. The responsibility is shared

    4. Depends on the region

  2. In a database implementation of Amazon RDS running MySQL, who is in charge of the
    operating system security patching, based on the Shared Responsibility Model?

    1. The customer

    2. The responsibility is shared

    3. AWS

    4. Depends on the region

  3. From where can you download ISO 27001, ISO 27017, ISO 27018, and other certification
    files in PDF format?

    1. AWS Security Portal

    2. AWS Artifact

    3. AWS GuardDuty

    4. AWS public website

  4. What is the SOC-1 Type 2 report?

    1. It evaluates the effectiveness of AWS controls that might affect internal controls over
      financial reporting (ICFR).

    2. It is a summary of the AWS SOC 2 report.

    3. It evaluates the AWS controls that meet the American Institute of Certified Public
      Accountants (AICPA) criteria for security, availability, and confidentiality.

    4. None of the above

      62 Chapter 2 Cloud Security Principles and Frameworks


  5. What is the SOC 2 Security, Availability, & Confidentiality report?

    1. It evaluates the effectiveness of AWS controls that might affect internal controls over
      financial reporting (ICFR).

    2. It is a summary of the AWS SOC 2 report.

    3. It evaluates the AWS controls that meet the American Institute of Certified Public
      Accountants (AICPA) criteria for security, availability, and confidentiality.

    4. None of the above

  6. Which option best defines the AWS Well-Architected Framework?

    1. It is a framework developed by AWS that describes best practices to help customers
      implement security best practices in their environments.

    2. It is a paid service developed by AWS that defines best practices to help customers
      implement best practices in their environments, improving security, operational
      excellence, reliability, performance efficiency, and cost optimization.

    3. It is a no-cost framework developed by AWS that defines best practices, helping the
      customer to implement their environments, improving security, operational excellence,
      reliability, performance efficiency, and cost optimization.

    4. It is a tool in the AWS console that helps customers automatically implement
      architecture best practices.

  7. What are the design principles defined in the AWS Well-Architected security pillar?

    1. Identity and access management, detective controls, infrastructure protection

    2. Data protection, identity and access management

    3. Implement a strong identity foundation, enable traceability, apply security at all layers,
      automate security best practices, keep people away from data, and prepare for security
      events

    4. Implement a strong identity foundation, enable traceability, apply security at all layers,
      automate security best practices, protect data in transit and at rest, keep people away
      from data, and prepare for security events

    5. Identity and access management, detective controls, infrastructure protection, data
      protection

  8. Who is in charge of the AWS hypervisor security when you, as a customer, are deploying
    Amazon EC2 instances in the AWS Cloud?

    1. AWS is always in charge of the hypervisor security.

    2. The customer is in charge of the hypervisor security.

    3. It depends on the type of instance.

    4. There is a Shared Responsibility Model, so the customer and AWS have the
      responsibility of the hypervisor security.



  9. What are the best practices areas for security in the cloud covered by the Well-Architected
    security pillar? (Choose all that apply.)

    1. Identity and access management

    2. Infrastructure protection

    3. Security awareness

    4. Incident response

    5. Security automation

    6. Detective controls

    7. Authentication and authorization

    8. Data protection

  10. You are looking for an endpoint protection solution, and you want to use the same solution
    that you are using on-premises today to improve your workload protection running on your
    Amazon EC2 instances. Where can you find endpoint protection solutions to protect your
    servers running in the AWS Cloud?

    1. AWS Console

    2. AWS website

    3. AWS Security Services

    4. AWS Marketplace

104 Chapter 3 Identity and Access Management


Review Questions

  1. When you first create your AWS account, what are the steps you take to protect your root
    account and provide secure, limited access to your AWS resources? (Choose three.)

    1. Create access keys and secret keys for the root account.

    2. Create an IAM user with the AdministratorAccess policy to perform day-to-day
      management of your AWS resources.

    3. Create a strong password for the root account.

    4. Enable multifactor authentication for the root account.

  2. When you’re creating resource-based policies, can you use IAM groups as principals?

    1. Yes.

    2. No.

    3. This relationship does not make sense.

    4. More information is needed.

  3. When writing a resource-based policy, which are the minimum required elements for it
    to be valid?

    1. Version, Statement, Effect, Resource, and Action

    2. Version, Statement, Effect, Principal, SID, and Action

    3. Version, Statement, Effect, Principal, and Action

    4. Version, Statement, Effect, Resource, and Condition

  4. What IAM feature can you use to control the maximum permission an identity-based policy
    can grant to an IAM entity?

    1. Service control policy (SCP)

    2. Session policies

    3. Permissions boundary

    4. All the options above

  5. How do you enforce SSL when you have enabled cross-region replication for your Amazon
    S3 bucket?

    1. In the configuration wizard, you must select Use SSL when you enable cross-region
      replication.

    2. Create a bucket policy that denies requests with a condition where aws:SecureTransport
      is false.

    3. SSL is enabled by default when using cross-region replication.

    4. Enable SecureTransport in the Amazon S3 console.
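    The aws:SecureTransport pattern referenced in option 2 is a common one; a minimal sketch of
    such a bucket policy follows, with the bucket name as an illustrative assumption:

    ```python
    import json

    # Sketch of a bucket policy that denies any request arriving over plain
    # HTTP (aws:SecureTransport is "false"), forcing clients to use SSL/TLS.
    # The bucket name is illustrative.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

    print(json.dumps(bucket_policy, indent=2))
    ```

    Because an explicit deny overrides any allow, this statement blocks insecure requests even
    when replication or other allows are in place.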



  6. You created an S3 bucket and assigned it a resource-based policy that allows users from
    other AWS accounts to upload objects to this bucket. What is the only way to manage the
    permissions for the uploaded objects?

    1. Create a bucket policy specifying the path where the objects were uploaded.

    2. Create an IAM role that gives full access permissions to users and groups that have this
      role attached.

    3. The owner of the objects must use ACLs to manage the permissions.

    4. None of the above.

  7. Which of the following is not true of the temporary security credentials issued by AWS
    Security Token Services (STS)?

    1. Temporary credentials are dynamic and generated every time a user requests them.

    2. Once expired, these credentials are no longer recognized by AWS, and any API requests
      made with them are denied.

    3. A user can never under any conditions renew the temporary credentials.

    4. When you issue a temporary security credential, you can specify the expiration interval
      of that credential that can range from a few minutes to several hours.

  8. You created a new AWS Organization for your account using Consolidated Billing. Later you
    learn about service control policies (SCPs). Now you want to use them in your organization.
    What do you need to do to take advantage of SCPs?

    1. You can start using SCPs without any change in the configuration of your AWS
      Organization.

    2. You should use the master account to start creating SCPs.

    3. You must log in with the root account credentials to use SCPs.

    4. You should open the AWS Organizations Management Console and on the Settings tab
      choose Begin Process To Enable All Features.

  9. Developers in your company are building a new platform where users will be able to log
    in using their social identity providers and upload photos to an Amazon S3 bucket. Which
    actions should you take to enable the users to authenticate to the web application and upload
    photos to Amazon S3? (Choose two.)

    1. Configure the SAML identity provider in Amazon Cognito to map attributes to the
      Amazon Cognito user pool attributes.

    2. Configure Amazon Cognito for identity federation using the required social identity
      providers.

    3. Create an Amazon Cognito group and assign an IAM role with permissions to upload
      files to the Amazon S3 bucket.

    4. Create an Amazon S3 bucket policy with public access to upload files.

    5. Create an IAM identity provider, with Provider Type set to OpenID Connect.



  10. One of your administrators created an Amazon S3 pre-signed URL and shared it with an
    external customer to upload system logs. However, the user receives Access Denied when they
    try to upload the logs. What are the possible reasons that the user cannot upload the logs?
    (Choose two.)

    1. Users uploading the files are not providing the correct access and secret keys.

    2. The administrator who generated the pre-signed URL does not have access to the S3
      bucket where the logs need to be uploaded.

    3. There is a bucket policy not allowing users to access the bucket using pre-signed URLs.

    4. The pre-signed URL has expired.
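    One of the listed causes is expiry. A SigV4 pre-signed URL carries its signing time and
    lifetime in the X-Amz-Date and X-Amz-Expires query parameters, so you can check whether a
    URL has lapsed without calling AWS; the helper below is an illustrative sketch (the URL and
    object key are hypothetical):

    ```python
    from datetime import datetime, timedelta, timezone
    from urllib.parse import parse_qs, urlparse

    def presigned_url_expired(url, now=None):
        """Return True if the URL's X-Amz-Date plus X-Amz-Expires lies in the past."""
        params = parse_qs(urlparse(url).query)
        signed_at = datetime.strptime(
            params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
        ).replace(tzinfo=timezone.utc)
        lifetime = timedelta(seconds=int(params["X-Amz-Expires"][0]))
        if now is None:
            now = datetime.now(timezone.utc)
        return now > signed_at + lifetime

    # Example: a URL signed at midnight UTC with a one-hour lifetime.
    url = (
        "https://example-bucket.s3.amazonaws.com/logs.txt"
        "?X-Amz-Date=20230101T000000Z&X-Amz-Expires=3600"
    )
    print(presigned_url_expired(url, now=datetime(2023, 1, 1, 2, 0, tzinfo=timezone.utc)))  # True
    ```

    Note this only detects expiry; the other correct cause (the signer lacking access to the
    bucket) is invisible in the URL and surfaces only as Access Denied at request time.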



Review Questions

  1. Read the following statements and choose the correct option:

    1. By default, a trail delivers management events.

    2. By default, a trail delivers insight events.

    3. By default, a trail delivers data events.

      1. I, II, and III are correct.

      2. Only I is correct.

      3. Only II is correct.

      4. Only III is correct.

  2. What is the representation of a point-in-time view of the attributes of a monitored resource
    in AWS Config called?

    1. Configuration snapshot

    2. Configuration item

    3. Configuration stream

    4. Configuration record

  3. Read the following statements about AWS Config Rules and choose the correct option:

    1. A rule can be a custom rule.

    2. A rule can be a managed rule.

    3. A rule can be a service-linked rule.

      1. I, II, and III are correct.

      2. I and II are correct.

      3. Only I is correct.

      4. Only II is correct.

  4. Which option do you use to validate the integrity of the log files delivered by AWS
    CloudTrail?

    1. The Amazon S3 validate-files action

    2. The AWS Config cloud-trail-log-file-validation managed rule

    3. The AWS CloudTrail validate-logs action

    4. There is no way to validate those log files’ integrity.
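For context on question 4: the real check is the AWS CLI's aws cloudtrail validate-logs command, which verifies delivered log files against signed digest files. A simplified sketch of the underlying idea (the field names here are invented simplifications; real digest files are additionally signed with a private key):

```python
import hashlib
import json

# Conceptual sketch of CloudTrail log file integrity validation: each digest
# entry records the SHA-256 hash of a delivered log file, so any tampering
# with the log no longer matches the recorded hash.
def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

log_file = json.dumps({"Records": [{"eventName": "ConsoleLogin"}]}).encode()

# What a digest entry for this file would record (simplified field names).
digest_entry = {"logFile": "trail/2024/12/01.json.gz",
                "hashValue": sha256_hex(log_file)}

def is_intact(data, entry):
    return sha256_hex(data) == entry["hashValue"]

print(is_intact(log_file, digest_entry))                 # True: unmodified
print(is_intact(log_file + b"tampered", digest_entry))   # False: altered
```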

  5. How could you centralize AWS CloudTrail log files from different accounts?

    1. Configure the trail as an organization trail.

    2. Configure the trail from different accounts to deliver to the same S3 bucket.

    3. Configure the Consolidate Trails feature in AWS Organizations.



      1. I, II, and III are correct.

      2. I and II are correct.

      3. Only I is correct.

      4. Only II is correct.

  6. Which of the following is not a valid direct subscription destination for an Amazon
    CloudWatch Logs log group?

    1. Amazon Kinesis Data Streams

    2. Amazon Kinesis Data Firehose

    3. AWS Lambda

    4. Amazon Elasticsearch Service

  7. How could you receive “high resolution” metrics in Amazon CloudWatch?

    1. Publish a custom metric of type “high resolution.”

    2. Selected AWS services produce “high resolution” metrics by default.

    3. Use the modify-resolution request to modify the attribute resolution of a
      standard metric to high resolution.

      1. I, II, and III are correct.

      2. I and II are correct.

      3. Only I is correct.

      4. Only II is correct.

  8. Which of the following is not part of the definition of an Amazon EventBridge rule?

    1. Bus

    2. Event pattern

    3. Remediation action

    4. Target

  9. How could you automate responses to a finding reported in Amazon GuardDuty?

    1. Creating a rule in Amazon EventBridge for findings directly received from
      Amazon GuardDuty

    2. Configuring an event in the S3 bucket, directly receiving the findings
      from GuardDuty

    3. Subscribing to the Amazon SNS topic, directly receiving the findings from
      Amazon GuardDuty

      1. I, II, and III are correct.

      2. I and II are correct.

      3. Only I is correct.

      4. Only II is correct.



  10. What is a collection of related findings, grouped by a common attribute and saved as a
    filter in AWS Security Hub, called?

  1. Findings group

  2. Insights

  3. Security standard

  4. Integrations



Chapter 5: Infrastructure Protection

Review Questions

  1. Read the following statements and choose the correct option:

    1. A VPC can extend beyond AWS regions.

    2. A VPC can extend beyond AWS availability zones.

    3. A subnet can extend beyond AWS availability zones.

      1. I, II, and III are correct.

      2. Only I is correct.

      3. Only II is correct.

      4. Only III is correct.

  2. Considering that you gave the CIDR block 172.16.100.128/25 to a subnet, which option
    is correct?

    1. The IP address of the VPC router is 172.16.100.128.

    2. The IP address of the DNS server is 172.16.100.130.

    3. The IP address 172.16.100.131 is the first one available for use in the subnet.

    4. You cannot assign this CIDR block to a subnet.
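Question 2 turns on the five addresses AWS reserves in every subnet: the network address, the next three (VPC router, Amazon-provided DNS, future use), and the broadcast address. Python's stdlib ipaddress module can enumerate them for the /25 in the question:

```python
import ipaddress

# AWS reserves the first four addresses and the last address of every subnet.
subnet = ipaddress.ip_network("172.16.100.128/25")
base = subnet.network_address

print(base)      # 172.16.100.128 - network address (reserved)
print(base + 1)  # 172.16.100.129 - VPC router
print(base + 2)  # 172.16.100.130 - Amazon-provided DNS
print(base + 3)  # 172.16.100.131 - reserved for future use
print(subnet.broadcast_address)  # 172.16.100.255 - broadcast (reserved)
print(base + 4)  # 172.16.100.132 - first address actually usable by a host
```

This is why 172.16.100.130 is the DNS server address, and why .131 is reserved rather than being the first usable address.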

  3. Read the following statements and choose the correct option:

    1. Internet gateways and egress-only Internet gateways allow Internet-outbound
      traffic.

    2. Both Internet gateways and egress-only Internet gateways support IPv4 and
      IPv6 traffic.

    3. Internet gateways and egress-only Internet gateways support network address
      translation.

      1. Only I is correct.

      2. I and III are correct.

      3. I and II are correct.

      4. I, II, and III are correct.

  4. Which statement about NAT gateways and NAT instances is not correct?

    1. You have to size and manage NAT instances.

    2. You can use NAT instances for port redirection.

    3. You do not have to disable the source/destination check on NAT gateways.

    4. You must assign a security group to a NAT gateway.

    5. You can use NAT instances as Bastion hosts.



  5. Read the following statements about security group rules and choose the correct option:

    1. You can allow HTTP from 10.0.40.0/24.

    2. You can block HTTP from your own public IP address.

    3. You can allow HTTP from any source.

      1. I, II, and III are correct.

      2. I and III are correct.

      3. II and III are correct.

      4. Only III is correct.

  6. Which statement about security groups and NACLs is not correct?

    1. You can configure inbound and outbound rules on both.

    2. You have to explicitly configure outbound rules to allow return traffic from permitted
      connections to instances associated with a security group.

    3. You can use a CIDR block as the source in NACLs.

    4. You can use other security groups as the source in security groups.

  7. Read the following statements about AWS Elastic Load Balancing and choose the
    correct option:

    1. ALBs, NLBs, and CLBs support health checks, CloudWatch metrics, and
      AZ failover.

    2. NLBs can support AWS Lambda functions as targets.

    3. CLBs can only be used with EC2-classic implementations.

      1. Only I is correct.

      2. I and II are correct.

      3. II and III are correct.

      4. I, II, and III are correct.

  8. What can VPC Flow Logs monitor?

    1. Amazon DNS traffic

    2. DHCP traffic

    3. Traffic from a Windows instance

    4. Traffic destined to the VPC router’s reserved IP address

  9. AWS WAF web ACL rules cannot detect which of the following conditions?

    1. SQL injection attacks

    2. Cross-site scripting attacks

    3. Length of requests

    4. HTTP response headers

    5. Country that requests originate from



  10. Which statement about AWS Shield is correct?

  1. AWS Shield Standard offers support of the AWS DDoS response team (DRT).

  2. AWS Shield Advanced is charged per month.

  3. AWS Shield Standard is disabled by default.

  4. AWS Shield Advanced is enabled by default.

Chapter 6: Data Protection


Review Questions

  1. Which of the following methods can be used to encrypt data in S3 buckets? (Choose three.)

    1. ACM using symmetric keys

    2. SSE-S3

    3. SSE-KMS

    4. SSE-C

  2. Which of the following methods is the cheapest encryption method that you can use with
    S3 buckets?

    1. SSE-S3

    2. SSE-KMS

    3. SSE-C

    4. The default S3 encryption method

  3. You are configuring a CMK using the KMS service console. Which permissions should you
    define and configure in the JSON security policy? (Choose three.)

    1. The IAM groups that can read the key

    2. IAM users that can be the CMK administrators

    3. IAM roles that can be the CMK administrators

    4. The application pool that will access the CMK

    5. IAM roles that can use the CMK

    6. The asymmetric algorithms that can be used

    7. The Cognito pool that will be used to authenticate the user to read the keys

  4. What happens when you delete a CMK using KMS?

    1. The key is deleted immediately.

    2. AWS KMS enforces a 60-day waiting period before you can delete a CMK.

    3. AWS KMS enforces a waiting period of at least 7 days and at most 30 days before
      deleting a CMK.

    4. It is impossible to delete a CMK.
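A quick way to remember the waiting-period constraint in question 4: the ScheduleKeyDeletion API's PendingWindowInDays parameter accepts 7 to 30 days, with 30 as the default. A minimal guard mirroring that constraint (the key ID in the commented-out call is a placeholder):

```python
# AWS KMS's ScheduleKeyDeletion takes PendingWindowInDays between 7 and 30
# (30 is the default); this helper just mirrors that constraint locally.
def deletion_window(days=30):
    if not 7 <= days <= 30:
        raise ValueError("PendingWindowInDays must be between 7 and 30")
    return days

# Where it would be used (requires boto3 and AWS credentials):
# import boto3
# boto3.client("kms").schedule_key_deletion(
#     KeyId="<your-cmk-id>", PendingWindowInDays=deletion_window(7))

print(deletion_window(7))   # 7: the shortest allowed waiting period
print(deletion_window())    # 30: the default and maximum
```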

  5. Which AWS service should you use to implement a ubiquitous encryption strategy in your
    AWS environment?

    1. Amazon Macie

    2. Amazon Inspector

    3. ACM

    4. AWS KMS

    5. CloudHSM



  6. When should you consider using CloudHSM? (Choose two.)

    1. To meet regulatory needs, such as FIPS 140-2 Level 3 standards

    2. To protect EC2 instances

    3. For SSL offloading

    4. Every time that you need to use KMS

  7. How does key rotation work when you are using a CMK?

    1. AWS KMS rotates automatically every 30 days.

    2. AWS KMS cannot rotate the key, so the user must rotate it manually.

    3. AWS KMS rotates the CMK every 365 days after the user enables automatic key
      rotation.

    4. There is no key rotation functionality, and only ACM can rotate keys automatically.

  8. What symmetric algorithm is used when a CMK is created?

    1. AES 128

    2. 3DES

    3. DES

    4. AES 256



automating security responses, you can reduce the time required to react to a security
incident, reducing your window of exposure and thus the overall risk. In the next
chapter, you will learn more about the tools available in the cloud for automating your
security responses.


Exam Essentials

Know that abuse notifications require attention. Keep in mind that when AWS Security
teams send abuse notifications, they require attention from your security team and usually
require your action as well. Ignoring them could potentially lead to account suspension.

Respond to the notifications you receive from AWS Support through the AWS
Support Center.

Know how to react to compromised credentials. When credentials are compromised,
you need to ensure that no resource created by malicious actors remains in the account,
including resources on your account that you didn’t create, such as EC2 instances and
AMIs, EBS volumes and snapshots, and IAM users. You need to rotate all potentially
compromised credentials. Changing the passwords of other users is a safety measure as well.

Know how to react to compromised instances. Investigate compromised instances
for malware, isolate them in the network, and stop or terminate them (ideally taking an
Amazon EBS snapshot for the forensics team to do their root cause analysis). In AWS
Marketplace you’ll find partner products that can help detect and remove malware.

Know how to use AWS WAF and AWS Shield to mitigate attacks on applications.
Remember to consider AWS Shield as a potential answer whenever you see DDoS attacks,
and to prepare your architecture to withstand the load until AWS Shield acts. Using
AWS CloudFront, AWS Elastic Load Balancing, and Route 53 helps as well. Remember that
you can use AWS WAF to mitigate many different application attacks by adding custom
rules, such as adding a rate limit to prevent scraping and other malicious bot activity.
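The rate-limit idea can be pictured with a toy sliding-window counter. This is an invented simplification of what a WAF rate-based rule does per source IP (WAF evaluates a rolling window per IP and blocks once the request count exceeds the configured limit), not AWS code:

```python
import collections

# Toy sliding-window rate limiter: block a source IP once it exceeds `limit`
# requests within `window` seconds, the idea behind a WAF rate-based rule.
class RateLimiter:
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = collections.defaultdict(collections.deque)

    def allow(self, ip, now):
        q = self.hits[ip]
        while q and now - q[0] >= self.window:
            q.popleft()          # drop requests that fell out of the window
        if len(q) >= self.limit:
            return False         # over the limit: block the request
        q.append(now)
        return True

rl = RateLimiter(limit=3, window=60.0)
print([rl.allow("198.51.100.7", t) for t in (0, 1, 2, 3)])
# [True, True, True, False] -> the fourth request in the window is blocked
```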


Review Questions

  1. What is the first action to take when a probable compromise of AWS IAM credentials
    is detected?

    1. Update the security contact of the AWS account.

    2. Deactivate the AWS IAM credentials.

    3. Delete the AWS IAM credentials.

    4. Modify the apps that use the AWS IAM credentials.



  2. Which of the following AWS services (alone or combined) would best suit the remediation
    phase of an incident lifecycle?

    1. AWS Config

    2. AWS CloudTrail

    3. AWS Systems Manager

    1. Only I

    2. Only III

    3. Combination of II and III

    4. Combination of I and III

  3. Which of the following is NOT contact information that you should always keep updated
    in your AWS account?

    1. Billing contact.

    2. Administrative contact.

    3. Security contact.

    4. Operations contact.

  4. What should you do when the AWS team sends you an abuse report from your resources?

    1. Review only when you receive more than one abuse notice for the same incident.

    2. Review and reply to the abuse report team as soon as possible.

    3. Review and reply to the abuse report team only after you are totally sure about what
      caused the issue.

    4. Review and solve the issue. There is no need to reply to the abuse team unless you
      have questions about the notification.

  5. Which of the following options minimizes the risk to your environment when testing your
    incident response plan?

    1. Automate containment capability to reduce response times and organizational impact.

    2. Develop an incident management plan with procedures to contain incidents and return
      to a known good state.

    3. Execute security incident response simulations to validate controls and processes.

    4. Use your existing forensics tools on your AWS environment.

  6. The security team detected a user’s abnormal behavior and needs to know if there were any
    changes to the AWS IAM permissions. What steps should the team take?

    1. Use AWS CloudTrail to review the user’s IAM permissions prior to the abnormal
      behavior and compare them to their current IAM permissions.

    2. Use Amazon Macie to review the user’s IAM permissions prior to the abnormal
      behavior and compare them to their current IAM permissions.



    3. Use AWS Config to review the user’s IAM permissions prior to the abnormal behavior
      and compare them to their current IAM permissions.

    4. Use AWS Trusted Advisor to review the user’s IAM permissions prior to the abnormal
      behavior and compare them to their current IAM permissions.

  7. Amazon GuardDuty reported finding an instance of Backdoor:EC2/C&CActivity.B!DNS in the
    production environment, outside business hours, and the security team is wondering how to
    react. What would be the most appropriate action to take?

    1. Instruct the forensics team to review the instance early tomorrow, since it does not
      reflect any immediate threat.

    2. Investigate the instance for malware, and isolate, stop, or terminate it as soon
      as possible.

    3. Explicitly deny access to security groups to isolate the instance.

    4. Use Amazon Inspector to analyze the vulnerabilities on the instance and shut down
      vulnerable services.

  8. You, the IAM access administrator, receive an abuse notification indicating that your
    account may be compromised. You do not find any unrecognized resource, but you see one
    of your IAM users with the following policy attached, which you did not attach:
    AWSExposedCredentialPolicy_DO_NOT_REMOVE. What would be the most appropriate
    action to immediately start remediation?

    1. Remove the policy since you did not create it and it may be the source of the issue.

    2. Change the access keys for the user and detach the policy as the issue was remediated.

    3. No action is needed since the policy restricts the usage of the user. The user should not
      be deleted. Open a support ticket for instructions on how to proceed.

    4. Delete the user, and be sure to check all regions for unrecognized resources or other
      users with the policy attached.

  9. Amazon GuardDuty reported the finding UnauthorizedAccess:EC2/TorClient related
    to an Amazon EC2 instance. You, as part of the security team, are determining how to
    react. What would be the most appropriate action to take?

    1. Unless you know that the Amazon EC2 instance uses an anonymization network for
      valid business needs, you should isolate or stop the instance since it can indicate that
      your instance is compromised.

    2. You should immediately terminate the instance.

    3. You can safely ignore this finding since it’s only informational, and it’s probably an end
      user using TOR Browser to access your site for privacy reasons.

    4. Use traffic mirroring to analyze the traffic to verify whether it is legitimate.

  10. You are a security analyst at a company. You recently discovered that developers embed
    access keys in the code of many business applications. You are concerned about credentials
    potentially being exposed by mistake. Which are the simplest and most effective actions to
    mitigate the risk? (Choose three.)



  1. Instruct the developers to use AWS Secrets Manager or AWS Systems Manager
    Parameter Store to avoid storing credentials in code.

  2. Enable Amazon Macie to detect access keys exposed to the public.

  3. Upgrade the support plan to Business or Enterprise Support and use AWS Trusted
    Advisor to detect exposed credentials.

  4. Build an AWS Lambda function to check repositories and notify using Amazon Simple
    Notification Service.

  5. Use Amazon CodeGuru to detect exposed credentials.



Chapter 8: Security Automation

Review Questions

  1. A company is worried about data loss and would like to detect the Amazon S3 buckets that
    allow access from outside their production account and let a security analyst decide whether
    to close the buckets or allow them to remain open. Which of the following services can be
    combined to accomplish such automation?

    1. Detect buckets with Trusted Advisor, and use an Amazon CloudWatch Events rule to
      trigger an AWS Lambda function to close the bucket.

    2. Use IAM Access Analyzer to detect buckets, and an AWS Security Hub custom action to
      trigger an Amazon CloudWatch Events rule that executes an AWS Lambda function to
      close the bucket.

    3. Use IAM Access Analyzer to detect buckets, and an Amazon CloudWatch Events rule
      that executes an AWS Lambda function to close the bucket.

    4. Use AWS Config rules to detect buckets, and auto-remediate with a Systems Manager
      automation.

  2. A company wants to ensure that there are no buckets without default encryption enabled
    and that if by mistake any administrator removes the default encryption, it should be
    automatically corrected to comply with the company’s policy. Which of the following
    automation options could accomplish the requested security objective? (Choose two.)

    1. Use AWS Security Hub’s native finding “PCI.S3.4 S3 buckets should have server-side
      encryption enabled” and trigger an AWS Lambda function to remediate.

    2. Use AWS Config and trigger an AWS Lambda function to remediate.

    3. Use AWS Config to detect using s3-bucket-server-side-encryption-enabled
      and auto-remediate using the AWS-EnableS3BucketEncryption SSM automation.

    4. Use AWS CloudTrail to detect the change on the Amazon S3 bucket properties and
      trigger the Amazon CloudWatch Events rule that executes an AWS Lambda function to
      remediate.

  3. A company wants to ensure that there are no buckets without default encryption enabled
    and that if by mistake any administrator removes the default encryption, it should be
    automatically corrected to comply with the company’s policy. Which of the following
    automations could accomplish the requested security objective with the least effort?

    1. Use AWS Security Hub’s native finding “PCI.S3.4 S3 buckets should have server-side
      encryption enabled” and trigger an AWS Lambda function to remediate.

    2. Use AWS Config and trigger an AWS Lambda function to remediate.

    3. Use AWS Config to detect using s3-bucket-server-side-encryption-enabled
      and auto-remediate using the AWS-EnableS3BucketEncryption SSM automation.

    4. Use AWS CloudTrail to detect the change on the Amazon S3 bucket properties and
      trigger an Amazon CloudWatch Events rule that executes an AWS Lambda function to
      remediate.
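Questions 2 and 3 both hinge on detecting buckets whose default encryption was removed. The detection half can be pictured as a pure compliance check; the sketch below is an invented simplification of what a Config rule evaluates (the dict keys mirror the shape of S3's GetBucketEncryption response, and the COMPLIANT/NON_COMPLIANT verdict is what a custom Config rule Lambda would report back):

```python
# Simplified sketch of the check behind the
# s3-bucket-server-side-encryption-enabled managed rule: a bucket is
# compliant only if a default-encryption rule with a valid algorithm exists.
def evaluate_bucket(config):
    rules = config.get("ServerSideEncryptionConfiguration", {}).get("Rules", [])
    for rule in rules:
        algo = rule.get("ApplyServerSideEncryptionByDefault", {}).get("SSEAlgorithm")
        if algo in ("AES256", "aws:kms"):
            return "COMPLIANT"
    return "NON_COMPLIANT"

encrypted = {"ServerSideEncryptionConfiguration": {"Rules": [
    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]}}
print(evaluate_bucket(encrypted))  # COMPLIANT
print(evaluate_bucket({}))         # NON_COMPLIANT: no default encryption
```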



  4. A company requires you to detect failed login attempts in the operating system of a
    critical instance and to make that information available to security analysts to investigate
    and decide whether to ignore or isolate. Which of the following actions can be
    recommended? (Choose two.)

    1. Sending all the OS logs to a SIEM among the AWS Security Hub’s partners and using
      a SIEM rule to create a finding in AWS Security Hub, then using AWS Security Hub’s
      custom actions to ease isolation

    2. Sending all the OS logs to AWS Security Hub and using AWS Security Hub’s actions to
      automate resolution

    3. Sending OS logs to Amazon CloudWatch logs through the agent, creating a metric filter
      and an alarm, and triggering an AWS Lambda that creates the finding in AWS Security
      Hub, then using AWS Security Hub’s custom actions to ease isolation

    4. Sending all the OS logs to a SIEM among the AWS Security Hub’s partners and using
      a SIEM rule to create a finding in AWS Security Hub, then using Amazon CloudWatch
      Events to trigger an AWS Lambda function to isolate

  5. A company’s chief information security officer (CISO) wishes to stop all instances where
    crypto-mining is detected in an automated approach for nonproduction accounts and in a
    semi-automated way for the production accounts. Which of the following security
    automations could help the company achieve this result? (Choose two.)

    1. Using Amazon GuardDuty to detect crypto-mining, and create AWS Security Hub’s
      custom actions to stop the instance

    2. Using AWS Config to detect crypto-mining and an Amazon CloudWatch Events rule to
      trigger an AWS Lambda function that changes the security groups, isolating the instance

    3. Using Trusted Advisor to detect crypto-mining and an Amazon CloudWatch Events rule
      to trigger an AWS Lambda function that changes the security groups, isolating the instance

    4. Using Amazon GuardDuty to detect crypto-mining, and an Amazon CloudWatch Events
      rule to trigger an AWS Lambda function that changes the security groups, isolating the
      instance

  6. Which of the following services are best suited to detecting and correcting configuration
    drift? (Choose two.)

    1. AWS Trusted Advisor

    2. AWS Config

    3. AWS Systems Manager State Manager

    4. AWS CloudTrail

    5. Amazon GuardDuty

  7. Which of the following services are integrated in AWS Security Hub to generate findings?
    (Choose three.)

    1. Amazon EC2

    2. Amazon Inspector

    3. Amazon GuardDuty

    4. AWS CloudTrail



    5. Amazon Macie

    6. Amazon Cognito

  8. The chief information security officer (CISO) of a company has just adopted Systems
    Manager Session Manager as the standard for managing remote access to their instances
    and bastions that accept inbound connections from specific IP addresses. Now, the CISO
    needs to detect any SSH opened to the world in the security groups and to revoke those
    accesses. What automation can be implemented to lock down these security groups?

    1. Use AWS Config to detect, and use Auto-Remediate.

    2. Use Trusted Advisor to detect, and use Auto-Remediate.

    3. Use AWS Config to execute an AWS Lambda function.

    4. Use AWS Systems Manager Session Manager to restrict access by other means.

  9. A company’s security analyst detected malware (a Remote Access Tool) on some instances
    that run web servers and are in an Auto Scaling group that maintains at least 20 other
    instances. No data is stored on those web servers, and while the Incident Forensics team
    analyzes how they got in, the security analyst wants to automate the rebuild of any
    compromised instance to ensure that the malware was removed. How would you suggest
    proceeding? (Choose three.)

    1. Run an antimalware software scan to remove the malware.

    2. Enable Amazon GuardDuty, and configure an Amazon CloudWatch Events rule to
      trigger a Run command execution to reinstall the web server.

    3. Enable Amazon GuardDuty, and configure an Amazon CloudWatch Events rule to
      trigger the termination of the instance when Remote Access Tools are detected.

    4. Use host intrusion prevention systems from the partners in the marketplace to harden
      the instances.

    5. Use AWS Systems Manager Patch Manager to patch the instances.

  10. A company needs to ensure all buckets on an account (that has only static websites) are
    configured with versioning to recover quickly from defacement attacks and wants to make
    sure that if by mistake someone disables versioning, it should automatically be turned on
    again. How can this be accomplished?

    1. Configure Amazon S3 Events to execute an AWS Lambda function that sets Versioning
      to Enabled when “Changes In Bucket Properties” occurs.

    2. Use a native AWS Config rule and Auto-Remediate.

    3. Use AWS Security Hub to detect Amazon S3 buckets without versioning and create a
      custom action to enable versioning.

    4. Use a custom AWS Config rule and Auto-Remediate.



Chapter 9: Security Troubleshooting on AWS

Review Questions

  1. Which of the following services can you use to view activities performed on your
    AWS account?

    1. AWS CloudTrail

    2. IAM

    3. Security Group

    4. AWS CloudWatch Logs

  2. Which of the following services can you use to monitor, store, and access log files?

    1. AWS CloudTrail

    2. AWS CloudWatch Logs

    3. AWS CloudWatch Events

    4. AWS VPC Flow Logs

  3. Which AWS resources can be used to help protect instances within the VPC? (Choose two.)

    1. Security groups

    2. NACL

    3. Routing tables

    4. Instance firewall

  4. NAT gateways are used in network address translation (NAT) to allow instances on which
    type of subnet to connect to the Internet?

    1. Public

    2. Private

    3. DMZ

    4. All types of subnet

  5. Which type of gateway can you use if you need to start a connection from the Internet?

    1. NAT gateway

    2. Internet gateway

    3. VPC gateway

    4. External gateway

  6. Which functionality can you use to connect two VPCs that allows you to have direct traffic
    between them?

    1. AWS routing tables

    2. AWS Internet gateway

    3. AWS VPC peering

    4. AWS Transit gateway



  7. Which of the following features makes it possible to capture information about IP
    traffic in a VPC?

    1. AWS CloudTrail

    2. AWS CloudWatch Logs

    3. AWS CloudWatch Events

    4. AWS VPC Flow Logs

  8. What three APIs can you use in AWS when working with federated entities in the STS
    service? (Choose three.)

    1. AssumeRole

    2. AssumeFederatedRole

    3. AssumeRoleWithWebIdentity

    4. AssumeRoleWithSAML

  9. Which feature is used to define the maximum permissions that a policy can grant to an
    IAM entity?

    1. IAM role

    2. Permissions boundary

    3. Groups

    4. IAM limits

  10. Which AWS service facilitates the creation and control of the encryption keys used to encrypt
    your data?

    1. AWS KMS

    2. AWS security group

    3. AWS key policy

    4. AWS IAM

Chapter 10: Creating Your Security Journey in AWS


Review Questions

  1. Before migrating to AWS Cloud, you want to leverage a native feature from the cloud
    provider to implement network segmentation. You currently deploy such segmentation in
    your data center via stateful firewall rules. Which of the following options best supports
    your decision?

    1. Network ACLs and security groups

    2. Third-party firewall provider in AWS Marketplace

    3. Security groups

    4. Network ACLs

    5. AWS WAF

  2. How can you deploy and manage a host intrusion prevention system (HIPS) in your
    Amazon EC2 environment?

    1. AWS Config run command

    2. AWS CloudFormation

    3. Amazon GuardDuty

    4. AWS Firewall Manager

    5. Third-party solution

  3. Your organization requires you to implement a deep packet inspection solution, such as
    an intrusion prevention system, in its AWS environment. Which option is the easiest way
    to deploy such a solution?

    1. Amazon GuardDuty

    2. Third-party solution from AWS Marketplace

    3. AWS Shield Advanced

    4. AWS Security Hub

    5. Amazon Inspector

  4. Your organization currently deploys a role-based access control (RBAC) system based
    on an AAA server integrated with an Active Directory system. As its security officer,
    which of the following options would you recommend to deploy RBAC in your AWS
    Cloud environments?

    1. AWS Organizations

    2. AWS Account Management

    3. AWS RBAC

    4. AWS IAM, AD Integration, and AWS CloudTrail

    5. AWS IAM and AD Integration



  5. As a CISO in a financial services conglomerate, you are asked to advise on the strongest
    way to isolate applications from different lines of business (such as retail banking,
    corporate banking, payment, and investments) during an all-in migration to AWS Cloud.
    Which option best supports this request?

    1. Network ACLs

    2. Security groups

    3. Subnets

    4. VPCs

    5. Accounts

  6. What is the easiest way to identify critical open ports such as SSH and RDP in your
    Amazon EC2 instances?

    1. Turn on AWS Config basic rules.

    2. Run AWS Trusted Advisor.

    3. Activate Amazon GuardDuty.

    4. Use AWS Systems Manager Session Manager.

    5. Look for logs in AWS CloudTrail.

  7. Within your AWS environment, what is the easiest way to correlate log information from
    Amazon GuardDuty and Amazon Inspector?

    1. SIEM from AWS Marketplace

    2. AWS CloudTrail

    3. AWS Firewall Manager

    4. AWS Security Hub

    5. Amazon Trusted Advisor

  8. Your organization intends to implement infrastructure as code to increase its application
    development agility. Which of the following elements should the company’s security team
    focus on to detect insecure configurations on AWS Cloud?

    1. AWS Config file

    2. AWS Config run command

    3. AWS CloudFormation template

    4. AWS WAF Web ACL

    5. Amazon GuardDuty listings

  9. You intend to deploy incident response automation via serverless technologies. Which two
    options will support your decision?

    1. AWS CloudFormation

    2. AWS Lambda

    3. Amazon Cloud Guru

    4. AWS Step Functions

    5. AWS Serverless



  10. Which practical security model can provide a solid foundation, in terms of deployment,
    skills, and processes, at each phase of a cloud security deployment?

    1. Zero trust

    2. Attack continuum

    3. Security wheel

    4. Absolute security

    5. 3-phased migration

Appendix A: Answers to Review Questions


Chapter 1: Security Fundamentals

  1. B. The concept of vulnerability is related to a fragility in a computer system, whereas a threat
    is defined by an entity exploiting a vulnerability. A security risk also considers the impact
    resulting from a threat being materialized. Therefore, options B and C are swapped.

  2. A. Confidentiality is concerned with preventing unauthorized disclosure of sensitive
    information and ensuring that the suitable level of privacy is maintained at all stages of data
    processing. Integrity deals with the prevention of unauthorized modification of data and with
    ensuring information accuracy. Availability focuses on ensuring reliability and an acceptable
    level of performance for legitimate users of computing resources. All statements present valid
    methods of addressing such concepts.

  3. B. The sentence refers to the undeniable confirmation that a user or system had in fact
    performed an action, which is also known as nonrepudiation.

  4. D. The classic AAA architecture refers to authentication, authorization, and accounting.

  5. A. The seven layers of the Open Systems Interconnection (OSI) model are Physical, Data
    Link, Network, Transport, Session, Presentation, and Application.

  6. C. The Internet Control Message Protocol (ICMP) is not a dynamic routing protocol. All the
    other options are correct.

  7. E. The intention of denial of service (DoS) is to exhaust processing resources (either on
    connectivity devices or computing hosts), thus keeping legitimate users from accessing the
    intended applications.

  8. C. Some VPN technologies, such as Multiprotocol Label Switching (MPLS), do not natively
    provide data confidentiality features such as encryption. All the other options are correct.

  9. C. The Payment Card Industry Data Security Standard (PCI DSS) requires that credit
    card merchants meet minimum levels of security when they process, store, and transmit
    cardholder data. The Health Insurance Portability and Accountability Act (HIPAA) is a set
    of security standards for protecting certain health information that is transferred or held in
    electronic form. The National Institute of Standards and Technology Cybersecurity Framework
    (NIST CSF) is a framework that assembles security standards, guidelines, and practices
    that have proved effective and may be used by entities belonging to any market segment. The
    General Data Protection Regulation (GDPR) is a set of rules created by the European Union
    (EU), requiring businesses to protect the personal data and privacy of EU citizens.

  10. C. The zero-trust security model is based on the principle of least privilege, which states that
    organizations should grant the minimal amount of permissions that are strictly necessary
    for each user or application to work. Option A cites the phases of the security wheel model.
    Option B refers to the attack continuum model phases. Option D defines methods of data
    encryption. Option E is not directly related to the zero-trust model.



    Chapter 2: Cloud Security Principles and
    Frameworks

    1. A. AWS is always in charge of the facilities’ security, including their data center, regardless of
      the type of service used. Options B and C are wrong because the customer is not accountable
      for AWS data center facilities security in the Shared Responsibility Model. Option D is wrong
      because the Shared Responsibility Model applies to all AWS regions.

    2. C. When you are using an Amazon RDS database, AWS is in charge of most of the security
      layers, such as physical security, operating system security, database patching, backup,
      and high availability. However, you still need to define the maintenance windows to patch
      the operating system and applications. Options A and B are wrong because the customer
      does not manage the operating system in the Shared Responsibility Model for the container
      services category (which includes Amazon RDS). Option D is wrong because the Shared
      Responsibility Model applies to all AWS regions.

    3. B. The AWS Artifact portal is your go-to resource for compliance-related information. It
      provides on-demand access to AWS’s security and compliance reports and a selection of online
      agreements. Option A is wrong because there is no such portal. Option C is wrong because
      Amazon GuardDuty is a service that provides monitoring on AWS Cloud environments. Option
      D is wrong because the AWS public website does not offer such certifications.

    4. A. The AWS SOC 1 Type 2 report evaluates the effectiveness of AWS controls that might
      affect internal controls over financial reporting (ICFR), and the auditing process is aligned
      to the SSAE 18 and ISAE 3402 standards. Options B and C refer to definitions that do not
      apply to the mentioned report.

    5. C. The AWS SOC 2 Security, Availability, & Confidentiality Report evaluates the AWS
      controls that meet the AICPA criteria for security, availability, and confidentiality. Options A and
      B refer to definitions that do not apply to the mentioned report.

    6. C. The Well-Architected Framework was created to help cloud architects build secure,
      high-performing, resilient, and efficient infrastructure for their applications. The framework
      is freely available to all customers. Option A is wrong because the AWS Well-Architected
      Framework is not only related to security best practices. Option B is wrong because
      the framework is not a paid service. Option D is wrong because the framework is more
      than a tool.

    7. D. The AWS Well-Architected security pillar dives deep into seven design principles for
      security in the cloud, and the seven principles are:

      Implement a strong identity foundation, enable traceability, apply security at all layers, auto-
      mate security best practices, protect data in transit and at rest, keep people away from data,
      and prepare for security events.

      Options A, B, C, and E are wrong because they contain items that do not include all of the
      previous principles.



    8. A. AWS is always in charge of the hypervisor security. When you start the operating system
      in your Amazon EC2 instance, you are in charge of updating the patches, implementing
      systems configuration best practices, and security policies aligned with your own security
      rules. Still, AWS is in charge of implementing the security patches, hardening and guidelines,
      and best practices in the hypervisor layer. Options B and D are wrong because the customer
      is not accountable for the hypervisor security in the AWS Shared Responsibility Model.
      Option C is wrong because the model applies for all Amazon EC2 instances that use AWS
      hypervisors.

    9. A, B, D, F, H. In the Well-Architected security pillar, there are five best practices areas for
      security in the cloud:

      • Identity and access management

      • Detective controls

      • Infrastructure protection

      • Data protection

      • Incident response

        Options C, E, and G do not refer to best practices areas in the Well-Architected security pillar.

    10. D. The AWS Marketplace is where you can find many security solutions that you can use
      to improve your security posture. You can use strategic AWS security partners. You can also
      use your own licenses in a Bring Your Own License model. The pay-as-you-go model is also
      available to you. Option A is wrong because it is too general. Option B is wrong because the
      AWS website does not provide such a service. Option C refers to a web page that does not
      provide such services.


Chapter 3: Identity and Access
Management

  1. B, C, D. Options B, C, and D are correct because it is a best practice to activate MFA
    and define a strong password policy for the root account. You should avoid using the
    root account to manage your AWS account. Instead, you should create an IAM user with
    an AdministratorAccess role to perform day-to-day administrative tasks. Option
    A is incorrect because you should not create root account access keys, unless strictly
    necessary.

  2. B. Option A is incorrect because an IAM group cannot be identified as a principal in a
    resource-based or trust policy. Option B is correct because you can use an IAM group to
    attach policies to multiple users at once.



  3. C. Option A is incorrect because the Resource element is implicitly defined when using
    resource-based policies. Options B and D are also incorrect because SID and Condition are
    not required elements when defining resource-based policies. Option C is correct because it
    specifies the Principal that has access to the bucket and the other minimum elements for a
    valid policy.
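To illustrate the minimum elements mentioned above, here is a small sketch of a valid resource-based (bucket) policy built as a Python dictionary. The bucket name and principal ARN are hypothetical placeholders, not values from the question.

```python
import json

# Minimal resource-based (bucket) policy sketch: a valid policy needs
# Version plus a Statement containing Effect, Principal, Action, and
# Resource. The account ID and bucket name below are hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that, unlike identity-based policies, the `Principal` element is required here because the policy is attached to the resource rather than to the identity.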

  4. C. Option C is correct because a permissions boundary allows you to define the maximum
    permission an identity-based policy can grant to users or roles. Option A is incorrect because
    SCPs apply to the entire account, and option B is incorrect because session policies are used
    when you create a temporary session programmatically.
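The capping behavior of a permissions boundary can be sketched as a set intersection: the effective permissions are only those granted by both the identity-based policy and the boundary. The action names below are illustrative, not taken from the question.

```python
# Sketch of how a permissions boundary caps identity-based permissions:
# the effective permissions are the intersection of what the identity
# policy grants and what the boundary allows (actions are illustrative).
identity_policy_actions = {"s3:GetObject", "s3:PutObject", "ec2:StartInstances"}
permissions_boundary_actions = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"}

effective_actions = identity_policy_actions & permissions_boundary_actions
print(sorted(effective_actions))  # ec2:StartInstances is filtered out
```

The boundary never grants permissions by itself; an action allowed by the boundary but absent from the identity policy (such as `s3:ListBucket` here) is still denied.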

  5. C. Option C is correct because when you enable cross-region replication for your Amazon
    S3 bucket, TLS/SSL communication is enforced by default.

  6. C. Option C is correct because objects uploaded are owned by the account that uploaded
    them. For the bucket owner to manage these objects, the object owner must first grant
    permission to the bucket owner using an object ACL.

  7. C. Option C is correct, because a user actually can renew the temporary credentials before
    their expiration as long as they have permission to do so.

  8. D. The correct option is D because you need to change from Consolidated Billing to Enable
    All Features to start using SCPs.

  9. B, C. Option B is correct because Amazon Cognito for identity federation provides guest
    users on your application with unique identities that, once they log in, can be replaced with
    temporary AWS credentials. Option C is correct because Amazon Cognito groups let users
    attach an IAM role with a policy that gives users access to Amazon S3.

  10. B, D. Option B is correct because the credential used to generate the Amazon S3 pre-signed
    URL must have permissions to access the object, or the access will fail. Option D is correct
    because the pre-signed URL has an expiration time that can be valid for up to 7 days.
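The 7-day ceiling on pre-signed URLs corresponds to the maximum value of the SigV4 `X-Amz-Expires` query parameter (604,800 seconds). A small validator sketch, with a hypothetical helper name:

```python
# SigV4 pre-signed URLs carry an X-Amz-Expires parameter capped at
# 7 days (604,800 seconds). validate_expiry is a hypothetical helper.
MAX_PRESIGNED_SECONDS = 7 * 24 * 3600  # 604800

def validate_expiry(seconds: int) -> int:
    """Return the expiry if within the allowed range, else raise ValueError."""
    if not 1 <= seconds <= MAX_PRESIGNED_SECONDS:
        raise ValueError(f"X-Amz-Expires must be 1..{MAX_PRESIGNED_SECONDS}")
    return seconds

print(validate_expiry(3600))   # a one-hour URL is fine
print(MAX_PRESIGNED_SECONDS)   # 604800
```

In practice you would pass the value as `ExpiresIn` to the SDK's pre-signed URL generator; requesting more than 7 days fails.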


Chapter 4: Detective Controls

  1. B. By default, an AWS CloudTrail trail only delivers the events of type Management. You
    need to explicitly enable Data and Insights type events for them to appear in a trail.

  2. B. In AWS Config terms, a configuration item is the representation of a single resource’s
    attributes. A configuration snapshot is a JSON file containing the configuration of all the
    monitored resources. A configuration stream is the notification of a change in a monitored
    resource as soon as it happens, delivered to an Amazon SNS topic.

  3. A. There are three types of AWS Config rules: custom rules (trigger a custom AWS Lambda
    function), managed rules (predefined by AWS Config), and service-linked rules (created by
    other AWS services).



  4. C. AWS CloudTrail provides the CLI-based validate-logs command to validate the
    integrity of log files of the trail. In addition to this, you can use a custom method by
    validating the PKI-generated signature strings. No other AWS services (like Amazon S3 or AWS
    Config) provide a mechanism to validate the integrity of the AWS CloudTrail files.

  5. B. An AWS CloudTrail trail can be set up as an organizational trail if configured in the
    management account of an AWS Organization. Another possible centralization mechanism is to
    store log files produced by different accounts into the same Amazon S3 bucket. There is no
    Consolidate Trails feature in AWS Organizations.

  6. D. Amazon CloudWatch Logs offers the subscription mechanism to deliver a near real-time
    stream of events that can be directly delivered to Amazon Kinesis Data Streams, Amazon
    Kinesis Data Firehose, or an AWS Lambda function. The AWS Management Console
    provides a wizard that allows you to create a subscription linking an Amazon CloudWatch Logs
    log group to a predefined AWS Lambda function that will insert records into an Amazon
    Elasticsearch Service cluster.

  7. C. High-resolution metrics (sub-minute reporting period) are only available for metrics
    reported by external sources (custom metrics). AWS services publish metrics in Standard
    resolution. The metric’s resolution is defined at the metric’s creation time; there is no
    modify-attribute action in Amazon CloudWatch.

  8. C. An Amazon EventBridge rule contains information about the event bus the rule is
    attached to, the event pattern (expression that matches the events of interest), and the target
    service. There is no remediation action in an Amazon EventBridge rule.
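The event-pattern component described above can be illustrated with a toy matcher for the exact-match subset of EventBridge patterns (real patterns also support prefix, numeric, and other operators not sketched here):

```python
# Toy matcher for the exact-match subset of Amazon EventBridge event
# patterns: every field in the pattern must exist in the event, and its
# value must be one of the listed candidates. Nested objects recurse.
def pattern_matches(pattern: dict, event: dict) -> bool:
    for key, candidates in pattern.items():
        if isinstance(candidates, dict):
            if not isinstance(event.get(key), dict):
                return False
            if not pattern_matches(candidates, event[key]):
                return False
        elif event.get(key) not in candidates:
            return False
    return True

# A pattern matching Amazon GuardDuty findings on the default bus:
guardduty_pattern = {"source": ["aws.guardduty"],
                     "detail-type": ["GuardDuty Finding"]}
event = {"source": "aws.guardduty",
         "detail-type": "GuardDuty Finding",
         "detail": {}}
print(pattern_matches(guardduty_pattern, event))  # True
```

An actual rule would pair such a pattern with a target (for example, an AWS Lambda function), which is exactly the rule structure the answer describes.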

  9. B. Amazon GuardDuty findings are automatically delivered to the default bus in Amazon
    EventBridge, and you can also specify to receive those findings in an Amazon S3 bucket you
    own. So, you can automate responses by creating an Amazon EventBridge rule and also
    linking an event to the Amazon S3 bucket you configured to receive the findings. Amazon
    GuardDuty does not provide an option to deliver findings to an Amazon SNS topic.

  10. B. AWS Security Hub insights are filters and groupings that facilitate the analysis. A
    findings group is not a definition in AWS Security Hub. The security standard refers to a list of
    security controls that AWS Security Hub can check. Integrations refers to the capability of
    receiving information from third-party security products or your own applications.


Chapter 5: Infrastructure Protection

  1. C. Statement I is wrong because a VPC is contained within an AWS Region. Statement II is
    right because a VPC can contain multiple AWS availability zones in a single region. Statement
    III is wrong because a subnet is contained within an AWS availability zone.

  2. B. Option A is wrong because the VPC router address in this subnet is 172.16.100.129
    (the first available address in the CIDR). Option B is correct because the DNS server in this
    subnet is 172.16.100.130 (the second available address in the CIDR). Option C is wrong
    because the first available address in the subnet is 172.16.100.132 (the fifth available address
    in the CIDR). Option D is wrong because you can assign 172.16.100.128/25 as a CIDR
    block in a VPC.
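The reserved addresses in this answer follow AWS's subnet rule: the first four addresses and the last address of every subnet are reserved (network, VPC router, DNS, future use, and broadcast). A short sketch computing them for the subnet in the question:

```python
import ipaddress

# AWS reserves the first four addresses and the last address of every
# subnet: network (+0), VPC router (+1), DNS (+2), future use (+3),
# and broadcast (last). Computed for the 172.16.100.128/25 subnet:
subnet = ipaddress.ip_network("172.16.100.128/25")
base = int(subnet.network_address)

router = ipaddress.ip_address(base + 1)
dns = ipaddress.ip_address(base + 2)
first_usable = ipaddress.ip_address(base + 4)

print(router, dns, first_usable)  # 172.16.100.129 172.16.100.130 172.16.100.132
```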

  3. A. Statement I is correct because both gateways allow Internet-outbound traffic in a VPC.
    Statement II is wrong because Internet gateways support both IPv4 and IPv6 whereas egress-
    only Internet gateways support only IPv6 traffic. Statement III is wrong because egress-only
    Internet gateways do not support NAT.

  4. D. Options A, B, C, and E are correct. Option D is not correct because you cannot assign a
    security group to a NAT gateway.

  5. B. Statements I and III are correct configurations in security groups. Statement II is not
    possible because security groups only have allow rules.

  6. B. Option B is not correct because security groups are stateful; you therefore do not need to
    configure outbound rules for return traffic from permitted connections. Options A, C, and D
    are correct.

  7. A. Statement I is correct because these load balancers indeed support such features.
    Statement II is incorrect because NLBs do not support AWS Lambda functions as targets.
    Statement III is incorrect because CLBs are not restricted to EC2-classic implementations.

  8. C. Options A, B, and D represent traffic that, per definition, cannot be monitored via VPC
    flow logs. Option C is correct because traffic from a generic Windows instance can be
    monitored via VPC flow logs.

  9. D. Options A, B, C, and E represent valid parameters on AWS WAF Web ACL rules.
    Option D is incorrect because AWS WAF can detect HTTP request headers, but not HTTP
    response headers.

  10. B. Option A is incorrect because only AWS Shield Advanced offers support of the AWS
    DDoS response team (DRT). Option C is incorrect because AWS Shield Standard is enabled
    by default. Option D is incorrect because AWS Shield Advanced is not enabled by default.


Chapter 6: Data Protection

  1. B, C, D. There are three possible options for encrypting data using S3 buckets: SSE-S3,
    SSE-KMS, and SSE-C. ACM is not a valid option in this case; it works only with
    asymmetric certificates, not symmetric encryption.

  2. A. There are no new charges for using SSE-S3 (server-side encryption with S3 managed
    keys). The S3 buckets do not have a default encryption method. When you create a bucket,
    by default it is private, but there is no default encryption defined.

    382 Appendix A Answers to Review Questions


  3. B, C, E. When you are defining a CMK, you must define three levels of access:

    The AWS root account level of access to the CMK, the IAM roles or users that have
    admin rights, and the IAM roles or users that have access to use the keys to encrypt and
    decrypt data.

    It is also important to remember that you cannot use IAM groups inside the CMK JSON
    security policy.
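The three levels of access described above map naturally onto three statements in the CMK key policy. The sketch below uses hypothetical ARNs; note that the principals are users or roles, never IAM groups, per the restriction just mentioned.

```python
# Sketch of a CMK key policy covering the three levels of access:
# root account access, key administrators, and key users. The account
# ID and role names are hypothetical. IAM groups cannot be principals.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "EnableRootAccess",
         "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
         "Action": "kms:*",
         "Resource": "*"},
        {"Sid": "AllowKeyAdministration",
         "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdminRole"},
         "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                    "kms:Put*", "kms:Disable*", "kms:ScheduleKeyDeletion"],
         "Resource": "*"},
        {"Sid": "AllowKeyUsage",
         "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppRole"},
         "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
         "Resource": "*"},
    ],
}

print([s["Sid"] for s in key_policy["Statement"]])
```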

  4. C. AWS KMS enforces a waiting period of a minimum of 7 days up to a maximum of
    30 days (the default configuration) before you can delete a CMK.

  5. D. AWS Key Management Service (KMS) allows you to encrypt data natively, in addition to
    the ability to integrate with more than 50 available services in the AWS Cloud.

  6. A, C. Usually, the use of CloudHSM is directly related to meeting regulatory needs, such as
    the FIPS 140-2 Level 3 standard. Another widespread use case is protecting web applications’
    private keys and offloading SSL encryption.

  7. C. When you enable automatic key rotation, AWS KMS rotates the CMK every 365 days
    from the enabled date, so once a year automatically. This process is transparent to the user
    and the environment. The AWS managed keys are the default master keys that protect the S3
    objects, Lambda functions, and WorkSpaces when no other keys (customer managed keys
    [CMKs]) are defined for these services.

  8. C, D. The AWS managed keys are the default master keys that protect the S3 objects,
    Lambda functions, and WorkSpaces when no other keys are defined for these services.

  9. D. A CMK is a 256-bit Advanced Encryption Standard (AES) symmetric key that
    has a unique key ID, alias, and ARN and is created based on a user-initiated request
    through AWS KMS.

  10. C. AWS KMS natively uses HSMs to protect the master keys, but these HSMs are not
    dedicated to a single client, so it is a multitenant HSM.


Chapter 7: Incident Response

  1. B. If an AWS IAM credential is leaked, the best practice is to revoke it as soon as you
    detect the compromise. Only then should you modify the apps, and once the functionality
    is restored, you can proceed with the deletion of the credentials. It is important to keep the
    security contact on your AWS account updated, but it is not the most critical action to
    execute immediately after detecting a possible compromise.

  2. D. Using AWS Config rules and the remediation feature with an AWS Systems Manager
    automation document is an effective way to remediate a deviation from a compliant configuration
    affecting a monitored resource. AWS CloudTrail helps with the detection phase of the
    suspicious activity.



  3. B. You should keep all of the contacts for your AWS account updated because they receive
    important notifications, some of which require action to keep your account and services
    in good standing. Administrative is not a contact definition inside the AWS account; billing,
    operations, and security are the alternate contact options within an AWS account.

  4. B. You should immediately review every abuse report you receive from AWS. To avoid any
    disruption, reply to the report as soon as possible explaining the actions you plan to take so
    that the AWS team is aware you are executing your incident response. Keep them informed
    until the incident is resolved.

  5. C. Although all answers are valid mechanisms for developing a sound incident response plan,
    the security incident response simulations are specifically oriented to minimize the risk when
    testing your plan.

  6. C. AWS Config is the most appropriate service to check if there were changes to an AWS
    resource. AWS CloudTrail gives you information about executed actions, Amazon Macie helps in
    the classification of information in Amazon S3 buckets (and detecting failures in protecting
    that information), and AWS Trusted Advisor informs you about your implementation of best
    practices.

  7. B. First, the security team should be able to assess the criticality of the incident. A backdoor
    report in the production environment requires immediate attention. Isolation is part of the
    reaction, but it should be complemented with the root cause analysis of the incident to detect
    the blast radius.

  8. D. The user had an IAM policy attached without the administrator’s knowledge, which
    is suspicious. So, the first action is to delete the suspicious user and check other possible
    actions taken by that user. Removing the policy or changing access keys does not
    remediate the issue.

  9. A. If you detect a suspicious activity (in this case an Amazon EC2 instance using a Tor client
    without a valid business need to do so), your next step is to try to isolate the incident; then
    you will contain, remediate, recover, and do a forensic analysis. Terminating the instance will
    not allow a comprehensive forensic analysis. Traffic mirroring is not effective since Tor clients
    use encryption to connect to the anonymization network. It is not a best practice to ignore
    a finding without further investigation (in addition, the finding reports outbound traffic
    using a Tor client, not the other way around).

  10. A, B, C. The constituent factors of an incident response plan are people, technology, and
    processes. Instructing the developers deals with people. Enabling Amazon Macie to detect
    access keys on Amazon S3 buckets deals with technology (use relevant AWS services). Using
    the right support plan helps with the processes that deal with incidents. Although you can
    create an AWS Lambda function to check repositories, it is not the simplest way to do
    that. Amazon CodeGuru helps in checking code quality (such as discovering inappropriate
    handling of credentials) but not in mitigating the risk.



Chapter 8: Security Automation

  1. B. Trusted Advisor and Config can be used to detect buckets with public access, but they are
    not designed to detect access from a sandbox account to an Amazon S3 bucket in the
    production account. For that, IAM Access Analyzer can be used. The question requested an analyst
    to have the final say on whether to close the bucket, so AWS Security Hub’s action should be
    used instead of fully automated remediation. AWS Config rules can also be used with manual
    remediation action, but option D is not valid as it specifies “auto-remediate” and this option
    would execute the remediation automatically.

  2. A, C. This question requires two things, not only to revert a change that removes default
    encryption, but also to go through all the Amazon S3 buckets and enable default encryption
    for them—that’s why only acting upon AWS CloudTrail is not enough.

    Option B is not valid, since AWS Config executes SSM Automation documents to
    auto-remediate, not AWS Lambda functions. It can be done indirectly using Amazon
    CloudWatch Events, but that is not specified in the option.

    AWS Security Hub’s PCI conformity pack includes a similar check to the Config rule
    (securityhub-s3-bucket-server-side-encryption-enabled-898de88c), and
    it’s possible to write AWS Lambda code to enable the Amazon S3 bucket encryption. AWS
    Config natively includes a rule (s3-bucket-server-side-encryption-enabled) that
    can detect what was requested, and with the native SSM Automation for auto-remediation
    (AWS-EnableS3BucketEncryption) it can be corrected automatically.

  3. C. The difference between question 2 and question 3 is that question 3 asks for you to solve
    it “with the least effort.” While AWS Security Hub’s PCI conformity pack includes the check,
    writing the AWS Lambda function to enable the Amazon S3 bucket encryption requires more
    effort than using the native AWS Config rule (s3-bucket-server-side-encryption-enabled)
    with the native SSM Automation for auto-remediation (AWS-EnableS3BucketEncryption).

  4. A, C. Option B is not correct because AWS Security Hub should receive only findings, not all
    logs. It’s not a SIEM solution.

    Option D is not correct because this option would automatically isolate and it’s not the
    intended end result.

    Both options A and C are ways to collect the events, allow custom rules (for example,
    triggering when a certain number of events arrive), and then create the finding, simplifying
    and accelerating the security analyst’s task of isolation.

  5. A, D. To accomplish what was requested, Amazon GuardDuty should be used as described in
    option A for production accounts so that a security analyst analyzes the impact of stopping
    an instance before doing it and, as described in option D, for nonproduction accounts. AWS
    Config detects changes on configurations, and Trusted Advisor does not include any checks
    that could detect crypto-mining.

  6. B, C. AWS Config can detect configuration drifts and auto-remediate, and Systems Manager
    State Manager can also accomplish similar results for configurations within the instances
    through a State Manager association.



  7. B, C, E. Amazon Inspector sends vulnerability findings to AWS Security Hub, Amazon
    GuardDuty sends its findings (potential threats) to AWS Security Hub, and Amazon Macie
    sends findings about weak configurations on Amazon S3 buckets to AWS Security Hub. The
    other services currently do not offer any native integration to send findings to AWS Security Hub.

  8. A. AWS Config has a native rule called restricted-ssh that checks whether security
    groups that are in use disallow unrestricted incoming SSH traffic. Editing that rule, adding
    a remediation action called AWS-DisablePublicAccessForSecurityGroup, setting
    Auto-Remediation to Yes, and specifying GroupId as the Resource ID parameter with a role
    that can be assumed by ssm.amazonaws.com can detect and auto-remediate.

    Option B is not correct because Trusted Advisor can’t auto-remediate.

    Option C is not correct because AWS Config executes SSM automations to remediate,
    not AWS Lambda.

    Option D is not correct. Session Manager can’t manage security groups; it provides OS
    access without requiring inbound ports to be open on security groups.

  9. C, D, E. Option C is correct because the Auto Scaling group will regenerate the instance,
    rebuilding from the launch template and ensuring that the malware is gone.

    Options D and E are correct since they would help mitigate the risk of getting infected.
    Option A is not correct. Running a malware scan doesn’t rebuild the instance as was
    requested.

    Option B is not correct because the steps described do not ensure that the
    malware is gone.

  10. B. Option A is not correct because there is no “Changes In Bucket Properties” within
    Amazon S3 Events.

Option B is correct because there is a native rule on AWS Config called
s3-bucket-versioning-enabled to ensure that versioning is enabled on Amazon S3
buckets, and there is a remediation action called AWS-ConfigureS3BucketVersioning
that can enable versioning.

Option C is not correct because remediation would not be fully automated.

Option D is not correct because it doesn’t make sense to create a custom AWS Config
rule when there is a native rule that does what’s needed.


Chapter 9: Security Troubleshooting on AWS

  1. A. AWS CloudTrail is a service that records all API calls performed on your AWS account.
    The service provides the event history of AWS account activity, including actions performed
    by the Management Console, the AWS SDKs, the command-line tools (CLI), and other
    AWS services.



  2. B. Amazon CloudWatch Logs can be used to monitor, store, and access log files from EC2
    instances, AWS CloudTrail, Route 53, and other sources. This service is used to centralize the
    logs of other services and applications in a robust, scalable, and managed solution.

  3. A, B. Two components can be used to protect resources within the VPC: the security group
    and network ACLs (NACL).

  4. B. NAT gateways are used in NAT to allow instances on a private subnet to connect to the
    Internet or other AWS services and to prevent the Internet from initiating a connection to
    those instances.

  5. B. An Internet gateway is a redundant and highly available component of the VPC that
    allows communication between the instances in the VPC and the Internet. In this case,
    starting the connection from the Internet is allowed.

  6. C. VPC peering is a network connection between two VPCs that allows you to direct traffic
    between them using private IPv4 or IPv6 addresses. Instances in any VPC can communicate
    with each other as if they were on the same network. You can create a peering connection
    between your VPCs or with a VPC from another AWS account. VPCs can also be in
    different regions.

  7. D. VPC Flow Logs is a feature that makes it possible to capture information about IP
    traffic on VPC network interfaces. Flow log data can be published to Amazon CloudWatch
    Logs and Amazon S3. When creating a flow log, you can use the standard flow log record
    format or specify a custom format; the custom format is only available for publication on
    Amazon S3.

  8. A, C, D. STS:AssumeRole is used when you need to assume a specific role after
    authenticating with your AWS account. The STS:AssumeRoleWithWebIdentity API is used
    in cases of federation with OpenID Connect (OIDC) providers such as Amazon Cognito,
    Login with Amazon, Facebook, Google, or any OIDC-compatible identity provider. The
    STS:AssumeRoleWithSAML API is used to assume a role when authenticated by a
    SAML-compliant service or provider, such as Active Directory.

  9. B. A permissions boundary is used to define the maximum permissions that an
    identity-based policy can grant to an IAM entity. The entity can then perform only the
    actions permitted by both its identity-based policies and its permissions boundary.

  10. A. AWS Key Management Service (AWS KMS) is a managed service that facilitates the
    creation and control of the encryption keys used to encrypt your data. The customer master
    keys that you create in AWS KMS are protected by hardware security modules (HSMs).

    AWS KMS is integrated with most AWS services that enable the encryption of your data.
    AWS KMS is also integrated with AWS CloudTrail to provide logs of usage of
    encryption keys to help meet your audit, regulatory, and compliance requirements.



    Chapter 10: Creating Your Security
    Journey in AWS

    1. C. Security groups are native features that provide stateful network segmentation rules.
      Network ACLs are stateless, AWS Marketplace solutions are not considered native services,
      and AWS WAF is specific to web application attacks.

    2. E. You will have to rely on a third-party solution to deploy HIPS in your environment.
      None of the AWS services provides this functionality as of this writing.

    3. B. None of the AWS services provide features that allow deep packet inspection. Therefore,
      you should rely on third-party solutions from AWS Marketplace.

    4. D. AWS IAM and AWS CloudTrail address authentication, authorization, and accounting
      in AWS environments, whereas AD integration brings the solution closer to the described
      on-premises scenario. AWS Organizations is focused on providing a hierarchy of
      AWS accounts, and the other services do not exist.

    5. E. AWS accounts provide the strongest way to separate lines of business (LoBs)
      administratively. All the other options are possible within the same AWS account,
      making it possible for a root account, for example, to influence multiple LoBs.

    6. A. AWS Config basic rules can identify the ports open on Amazon EC2 instances.

    7. D. AWS Security Hub aggregates, organizes, and prioritizes your security alerts, or findings,
      from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie,
      AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager.

    8. C. From all the options, AWS CloudFormation is the most appropriate service to deploy
      infrastructure as code. An AWS CloudFormation template describes the resources that you
      want to provision in your AWS CloudFormation stacks. Therefore, the company’s security
      team should definitely look for insecure configurations on these elements.

    9. B, D. AWS Lambda and AWS Step Functions are serverless technologies that can provide
      incident response automation.

    10. C. The security wheel model recognizes that the security practice has a continuous and
      cyclical nature and is structured in five basic stages: develop a security policy; implement
      security measures; monitor and respond; test; and manage and update. The zero-trust model
      relies on the principle of least privilege whereas the Attack Continuum model establishes
      mechanisms for three different periods: before, during, and after the attack. Absolute security
      and 3-phased migration are not practical security models.


      

      Mock Tests

      This chapter contains two mock tests that will simulate taking a real-world AWS Certified
      Advanced Networking - Specialty exam. The questions have been written to test the same
      domains, in the same proportions, as the real exam. The sample questions here are also
      designed to simulate the difficulty level of the real-world exam. You should be able to
      complete the exam within 90 minutes and achieve a score of 80% or above before
      attempting the real AWS Certified Advanced Networking - Specialty exam. Good luck!


      Mock Test 1


      1. You are connecting your on-premises environment to AWS. You need to connect
        to the us-west-2 region, where all your services are located. Your on-premises
        environment is located in New York, on the East Coast. You are required to create
        a cost-effective solution that is highly available. Which of the following
        options would satisfy those requirements?


        1. Establish a Direct Connect link by using a provider that will connect
          your on-premises site with the us-west-2 region with a private link. The
          Direct Connect provider will ensure two connections are established,
          making the link highly available.

        2. Establish a VPN between AWS and your on-premises site with the
          us-west-2 region using a VGW. The VGW will provide you with two
          tunnel endpoints, making the link highly available.

          Mock Tests Chapter 12


        3. Establish two VPNs between AWS and your on-premises site with
          the us-west-2 region using two VGWs. The two VGWs will provide you
          with two tunnel endpoints, making the link highly available.

        4. Establish a Direct Connect link by using a provider that will connect
          your on-premises site with the us-west-2 region with a private link.
          Establish a backup VPN between AWS and your on-premises site with
          the us-west-2 region using a VGW. Having both a Direct Connect and a
          VPN connection established will make the link highly available.


      2. Your company has an existing environment with VPN connections across
        multiple sites. You are in charge of connecting an AWS VPC into the existing
        infrastructure. All the existing sites are using a GRE-based tunneled virtual
        overlay network that is terminated at the VPN gateway. What would be the most
        efficient way to integrate the VPC into the existing infrastructure?


        1. Create a VGW with the GRE encapsulation option enabled to connect
          into the existing network using the GRE tunnel.

        2. Redeploy the existing VPNs to IPSec. GRE is not supported on AWS.

        3. Deploy a CloudHub VPN environment and connect all your
          on-premises sites to the CloudHub VPN.

        4. Create a custom EC2 instance or choose a solution from the
          marketplace.

      3. When deploying a VPC, what part of the shared responsibility is AWS
        responsible for?

        1. Making sure the VPC traffic is encrypted

        2. Making sure the VPC security groups are correctly configured

        3. Making sure the VPC internet gateway is highly available

        4. Making sure the VPC networks do not overlap with other clients




      4. Your SysOps team has deployed a Linux server to a VPC. They have configured
        a security group rule and a network access control list to allow only incoming
        traffic on port 22. The SysOps team is unable to connect to the Linux server.
        They have tried redeploying the server to no avail. What is the most likely
        cause of the issue?

        1. The Linux server does not have the SSH service started.

        2. The network access control list policy for the outbound response is
          blocking the connection.


        3. The security group policy for the outbound response is blocking the
          connection.


        4. The Linux server firewall policy for the outbound response is
          blocking the connection.

      5. You have been tasked with establishing a VPN connection for a Linux server in
        your on-premises environment that needs to use rsync to continuously mirror the
        contents of a volume to an identical Linux server running in AWS. You set up the
        VGW and configure two of your own customer gateways with the tunnel
        information provided by AWS. When looking at the state of the connection, you
        can see that the connection to each of the customer gateways is down. What do
        you need to do to ensure that the tunnel state changes to UP?


        1. Ping the AWS Linux host from the on-premises Linux host. This will
          bring the tunnel up.

        2. Remove one of the tunnel connections. Only one VGW tunnel can be
          up at once.


        3. Ping the on-premises Linux host from the AWS Linux host. This will
          bring the tunnel up.

        4. Set the state of the tunnel in the AWS management console to UP.




      6. You have set up a Direct Connect connection to AWS from a partner colocation
        facility. Your data center is located in the same metro area, and you would like
        to extend the private link from the colocation facility to your on-premises
        environment. One of your compliance requirements is that your data never
        traverses the internet. What is the correct way to approach this problem?


        1. Consult with the AWS partner and see whether they can help you
          establish an optical or MPLS link between the colocation facility and
          your on-premises environment.

        2. Use a VPN between your on-premises environment and your
          customer device in the colocation facility to establish a virtual
          private last-mile link.

        3. Instead of locating your customer gateway in the colocation facility,
          move it to your on-premises environment and have the AWS partner
          terminate the Direct Connect connection directly in your on-premises
          data center.


        4. Consult with AWS Support and see whether they can help you
          establish an optical or MPLS link between the colocation facility and
          your on-premises environment.


      7. Your VPC consists of both public and private subnets. The public subnets run the
        web frontend, and the private subnets run the backend and databases. To ensure
        greater security, a recommendation was given by the security team to remove the
        public subnets and put all the web frontends in the private networks behind load
        balancers. You remove the public subnets and move the web servers to the
        private ones, but now you cannot reach the DynamoDB table that the web
        servers use to sync their sessions. What is the simplest way to allow the
        application to share cache information in DynamoDB?


        1. Set up a NAT gateway. The NAT gateway will route the traffic to the
          public endpoint for DynamoDB.


        2. Set up a VPC interface endpoint for DynamoDB. The interface
          endpoint will route the traffic to a private endpoint for DynamoDB.


        3. Set up a VPC gateway endpoint for DynamoDB. The gateway
          endpoint will route the traffic to a private endpoint for DynamoDB.

        4. Use ElastiCache instead of DynamoDB for session sharing.




      8. You are designing a 10 Gbps Direct Connect link with a VPN backup over your
        500 Mbps internet uplink to provide high availability for your hybrid
        deployment. How can you ensure that the traffic is always sent via the Direct
        Connect link, utilizing the performance and low latency of the Direct Connect
        link, while failing over smoothly to the VPN if the Direct Connect link goes
        down?


        1. In the Direct Connect console, make the VPN connection
          secondary.

        2. In the Direct Connect console, make the Direct Connect
          connection primary.

        3. In the Direct Connect console, enable Bidirectional
          Forwarding Detection on the Direct Connect connection.

        4. In the Direct Connect console, enable BGP as the routing
          protocol.

        5. All of the above.

      9. Your company policy is to deploy each application into its own account in
        several different regions. Separate VPCs for development, testing, staging, and
        production are deployed in each account. A new global compliance requirement
        has been issued that will require the use of a centralized security VPC, which will
        contain services that need to be accessible from all VPCs, regardless of the
        account. VPC peering was recommended as the solution, and you are required to
        architect the design. What is the main concern when designing the solution?


        1. The VPCs are in separate accounts so this will not be possible. Use a
          VPN instead of VPC peering.

        2. The VPCs might have overlapping IP ranges.

        3. The applications deployed in the VPCs might not be compatible with
          VPC peering.


        4. The VPC peering connections only work within a region. Use a VPN
          instead of VPC peering.




      10. You are troubleshooting a CloudFormation template that was designed by your
        DevOps team to deploy VPCs for your test, development, and production
        implementations. The VPC networks are as follows:

        Development VPC: 10.0.0.0/24; subnet 10.0.0.0/24

        Test VPC: 10.0.0.0/23; subnets 10.0.0.0/24 and 10.0.1.0/24

        Production VPC: 10.0.0.0/16; subnets 10.0.0.0/16 and 10.0.1.0/16


        What is the reason for the CloudFormation deployment failing?

        1. The CloudFormation deployment is failing because the VPC ranges
          are overlapping. Change the development VPC to the 192.168.0.0/24
          network and the test VPC to a 172.16.0.0/16 network so that the
          ranges do not overlap.

        2. The CloudFormation deployment is failing because you cannot
          deploy more than one VPC in one stack.

        3. The CloudFormation deployment is failing because the test VPC
          subnets should each have a suffix of /23.

        4. The CloudFormation deployment is failing because the production
          VPC subnets should each have a suffix of /24.
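The subnet arithmetic in this question can be checked with Python's standard ipaddress module; the CIDRs are taken from the question above:

```python
import ipaddress

def subnets_fit(vpc_cidr, subnet_cidrs):
    """Check that every subnet is a valid network contained in the VPC
    CIDR and that no two subnets overlap each other."""
    vpc = ipaddress.ip_network(vpc_cidr)
    try:
        nets = [ipaddress.ip_network(s) for s in subnet_cidrs]
    except ValueError:  # e.g. host bits set, as in 10.0.1.0/16
        return False
    contained = all(n.subnet_of(vpc) for n in nets)
    disjoint = all(not a.overlaps(b)
                   for i, a in enumerate(nets) for b in nets[i + 1:])
    return contained and disjoint

# CIDRs from the question.
print(subnets_fit("10.0.0.0/23", ["10.0.0.0/24", "10.0.1.0/24"]))  # True
print(subnets_fit("10.0.0.0/16", ["10.0.0.0/16", "10.0.1.0/16"]))  # False
```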


      11. Your company has migrated the database server to a Multi-AZ RDS database
        that is deployed in your VPC private subnet. The database admin has changed
        the IP address in the application configuration from the old custom instance to
        the IP address of the primary RDS instance. While testing failover, the database
        admin has noticed that the requests are still being sent to the primary instance.
        How can this be fixed in the easiest possible way?


        1. Create a script that pings the primary instance, and if it is not
          responding, it should switch the IP in the application configuration to
          the secondary instance.

        2. Use the RDS DNS name instead of the IP.

        3. Enable the RDS autoswitch-IP option in the Multi-AZ configuration.

        4. Use an Elastic Network Interface on the primary RDS to quickly
          switch to the secondary if there is a failure.




      12. You have been tasked with designing a solution that will isolate the management
        workload from the customer traffic in your VPC. What would be the best
        approach to achieve the desired functionality?


        1. Implement a marketplace solution that will allow virtual separation
          of networks within AWS.

        2. This is not possible due to the design of the VPC.

        3. Use an ENI on each of your instances to connect the second network
          subnet designated for management.

        4. Check the Connect instances to management network option in the
          VPC configuration. This feature allows you to connect your instances to
          the built-in VPC management network in AWS.


      13. You are deploying a static website on an S3 bucket. You have decided to deploy
        it in the us-east-1 region. You test the performance of the website from different
        customer locations across the US and EU and find the performance is not
        adequate. What is the correct setup that will improve the performance of your
        application at the lowest possible cost?


        1. Mirror the S3 bucket to several US and EU regions so that the content
          is closer to your client locations.


        2. Enable a CloudFront distribution and point it to the bucket with the
          default price class (All edge locations - best performance)
          selected.


        3. Enable a CloudFront distribution and point it to the bucket with
          price class 100 (North America and Europe) selected.


        4. Use an edge-optimized API Gateway to forward requests to the S3
          bucket for the static content.




      14. You have been tasked to manage version control in CloudFormation. The
        production stack has already been deployed in AWS. What is the least invasive
        way to implement version control at this point?

        1. Use CloudFormation change sets.

        2. Save your templates to a version control repository. Use a CI server to
          redeploy the production stack.

        3. Use OpsWorks instead of CloudFormation.

        4. Use Elastic Beanstalk instead of CloudFormation.

      15. You are using an m5.2xlarge EC2 instance to ingest data from S3. The data is
        well distributed across tens of thousands of keys and reaches between 2 TB and 6
        TB each day. You have deployed a VPC endpoint and ensured enhanced
        networking is enabled on the instance. Your developers are complaining that the
        data ingestion keeps failing to include data on days when the volume of data is
        higher than 4 TB. How could you ensure that the data is ingested in the most
        efficient manner possible?

        1. Use one m5.4xlarge instance to ingest the data.

        2. Use two m5.2xlarge instances to ingest the data.

        3. Use three m5.xlarge instances to ingest the data.

        4. Use four m5.large instances to ingest the data.
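As a back-of-envelope check, the average throughput needed for the daily volume can be computed as follows, assuming decimal terabytes and a perfectly even spread over 24 hours (real ingestion is bursty, which is why average figures alone can mislead):

```python
# Back-of-envelope: sustained network throughput needed to move 6 TB
# in one day, assuming decimal units (1 TB = 10**12 bytes) and an even
# spread over 24 hours.

terabytes_per_day = 6
bits = terabytes_per_day * 10**12 * 8
seconds = 24 * 60 * 60

gbps = bits / seconds / 10**9
print(round(gbps, 2))  # 0.56
```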

      16. An application behind a load balancer is being deployed in your environment.
        Your manager is extremely wary about any DDoS attacks against your
        application. Your manager has asked you to recommend a solution to secure
        your application against DDoS attacks with the most comprehensive approach to
        DDoS mitigation. Which option would you recommend?

        1. Use the AWS WAF custom IP rules to implement DDoS mitigation.

        2. Use AWS Shield Advanced on your ELB to implement DDoS
          mitigation.

        3. Use NACL custom IP rules to implement DDoS mitigation.

        4. Use an AWS Marketplace solution to implement DDoS mitigation.




      17. You are running an HPC workload on EC2 instances deployed in a cluster
        placement group. You have enabled enhanced networking and are looking to get
        the maximum performance out of the network. Which additional feature would
        enable you to get the most out of this setup?


        1. Open up your ACLs and security groups to inbound:ALL and
          outgoing:ALL. This will help speed up the traffic as no checks will be
          done on the packets.


        2. Turn the VPC jumbo frame setting on at startup. Jumbo frames
          will add increased performance to your placement group.


        3. Deploy your instances in a spread placement group. This will spread
          the traffic over multiple network devices.

        4. Set a higher MTU in your operating system. Jumbo frames will add
          increased performance to your placement group.

      18. You have a cluster of three EC2 instances with public IP addresses. The public
        IPs are mapped to a Route 53 DNS name for your application. Your application is
        slowing down and you need to increase the instance size to be able to
        accommodate the traffic increase. You power the instances off, change the size,
        and power them on. The instances pass the health checks and you can SSH into
        them but the application is still not available. What would be the reason?

        1. Redeploy the Route 53 public zone.

        2. Restart the instances; the services did not come up correctly.

        3. Restart the services in the instance to make sure they come up
          correctly.


        4. Public IPs have changed when the instances were shut down.
          Reconfigure the DNS name in Route 53.




      19. When diagnosing a VPC Flow Log, you see the following entry:


        2 123123456456 eni-12d8da8 10.0.0.121 10.0.1.121 3321 22 6 14 3218 1550695423 1550695483 REJECT OK


        What does this VPC Flow Log mean?

        1. Instance 10.0.0.121 tried to SSH to instance 10.0.1.121 on port 22,
          but the connection was not allowed.

        2. Instance 10.0.1.121 tried to SSH to instance 10.0.0.121 on port 22,
          but the connection was not allowed.

        3. Instance 10.0.1.121 established an SSH connection to instance
          10.0.0.121 on port 22.

        4. Instance 10.0.0.121 established an SSH connection to instance
          10.0.1.121 on port 22.
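For reference, a default-format Flow Log record can be split into named fields with a few lines of Python; the record here is the one from this question:

```python
# Parse the default VPC Flow Log format into named fields.
# Default field order: version, account-id, interface-id, srcaddr,
# dstaddr, srcport, dstport, protocol, packets, bytes, start, end,
# action, log-status.

FIELDS = ("version account_id interface_id srcaddr dstaddr srcport "
          "dstport protocol packets bytes start end action log_status").split()

def parse_flow_log(record: str) -> dict:
    return dict(zip(FIELDS, record.split()))

record = ("2 123123456456 eni-12d8da8 10.0.0.121 10.0.1.121 "
          "3321 22 6 14 3218 1550695423 1550695483 REJECT OK")

entry = parse_flow_log(record)
print(entry["srcaddr"], entry["dstport"], entry["action"])
# 10.0.0.121 22 REJECT
```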


      20. You have peered VPC A with VPC B and VPC C with VPC A. Services in VPC B
        would require communication with VPC C. What options do you have to enable
        this? (Select all that apply.)

        1. Proxy the VPC B <-> VPC C traffic in VPC A.

        2. VPC A is a middle point where routing can be created to allow traffic
          to pass.

        3. Peer VPC B and VPC C.

        4. Use the CloudHub VPN to connect the VPCs.

        5. None of the above.

      21. You have peered VPC A with VPC B. You try to ping the other side but there is
        no response. What could be the problem?

        1. You need to enable the ICMP protocol on the peering link.

        2. The routes for VPC A have not been created in VPC B and vice versa.

        3. You need to wait 15 minutes for the automatic route propagation
          from VPC A to propagate to VPC B and vice versa.

        4. The target on the other side is not available.




      22. Company A is using a 10 Gbps Direct Connect link to store petabytes of data in
        S3 with a public VIF over HTTP. They have employed you to secure the data
        being transferred across the Direct Connect link with encryption. What would
        be the best option to encrypt all the data in transit immediately?


        1. Use HTTPS when connecting to S3. This will encrypt the data in
          transit.

        2. Deploy an IPSec VPN on the VIF. This will encrypt the data in transit.

        3. Use client-side encryption when uploading to S3. This will encrypt
          the data in transit.

        4. Nothing needs to be done. Data is encrypted automatically over
          Direct Connect.


      23. Your company uses a gateway endpoint for all the private subnets to connect to
        the S3 service. The implementation consists of custom route tables for each
        subnet in the environment. Another network admin created the setup. There is a
        requirement for a new private subnet, and you are tasked to deploy it. Once
        deployed, the EC2 instances in the new subnet cannot access the S3 service. What
        is the solution?

        1. Create a new VPC gateway endpoint in that subnet.

        2. Create a new VPC interface endpoint in that subnet.

        3. Create a new security entry for the new subnet in the S3 gateway
          security policy.

        4. Create a new entry for the S3 gateway in the route table of the
          subnet.


      24. You are deploying a VPN solution from the AWS Marketplace on an EC2
        instance. How can you ensure that the instance has optimal performance? (Select
        two.)

        1. Use IOPS-optimized EBS volumes.

        2. Use an instance type with lots of memory.

        3. Use an instance type with the appropriate amount of network
          throughput.

        4. Enable enhanced networking on the instance.




      25. To ensure the highest network performance between two EC2 instances, which of
        the following would you select?

        1. Start the instances at the exact same time.

        2. Start the instances in a cluster placement group.

        3. Start the instances in a spread placement group.

        4. Start the instances in a network placement group.

      26. You have set up two VPN connections with two VGWs to allow for aggregating
        the performance of multiple VPNs using BGP. You have created AS_PATHs of
        the same length for each VPN, but the traffic to your network seems to prefer one
        VPN over the other. What would make one VPN be preferred over another?


        1. The ASN of the preferred VPN is lower than the ASN of the second
          one.


        2. The MED property on the preferred VPN connection is higher than
          the second one.

        3. The second VPN is still configured as static.

        4. The prefix advertised on the preferred VPN is more specific than the
          second one.

      27. While building a highly available application in AWS, a compliance requirement
        has been determined that requires end-to-end encryption to be implemented.
        Which solution would allow for implementing end-to-end encryption while
        remaining highly available?


        1. An ELB with SSL offloading serving an HTTPS endpoint. Highly
          available EC2 instances in an autoscaling group. An SSL-encrypted
          Multi-AZ RDS MySQL backend.

        2. Two ELBs serving two HTTPS endpoints. Two EC2 instance
          autoscaling groups. An SSL-encrypted RDS MySQL backend.




        3. An ELB with SSL offloading serving an HTTPS endpoint. Two EC2
          instance autoscaling groups. An SSL-encrypted Multi-AZ RDS MySQL
          backend.

        4. An ELB serving an HTTPS endpoint. Highly available EC2 instances
          in an autoscaling group. An SSL-encrypted Multi-AZ RDS MySQL
          backend.


      28. A development team is planning on implementing blue-green deployment in
        their environment. Which service could you use to enable blue-green
        deployments?

        1. Use Route 53 with weighted routing.

        2. Use Route 53 with latency-based routing.

        3. Use the API Gateway with weighted routing.

        4. Use the ELB with latency-based routing.

      29. Your team has written a CloudFormation template that deploys the VPC, the
        subnets, the IGW, and routing. The CloudFormation template syntax is correct
        and the template starts deploying. The template fails when creating the subnets.
        What could be the cause?


        1. The subnets are listed before the VPC. Move the subnets down so
          that they are created after the VPC.

        2. The resources are being created in parallel. Remove the In-Parallel
          tag from the subnets.

        3. The resources are being created in parallel. Add a DependsOn
          attribute to the subnets.

        4. The resources are being created in parallel. Add a DependsOn
          attribute to the VPC.
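For reference, a minimal sketch of what an explicit dependency looks like in a CloudFormation-style template, built here as a plain Python dict (resource names and CIDRs are hypothetical):

```python
# Minimal sketch of a CloudFormation-style template in which the subnet
# declares an explicit DependsOn so it is created after the VPC.
# Resource names and CIDRs are hypothetical.

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "MySubnet": {
            "Type": "AWS::EC2::Subnet",
            "DependsOn": "MyVPC",
            "Properties": {
                "VpcId": {"Ref": "MyVPC"},
                "CidrBlock": "10.0.0.0/24",
            },
        },
    },
}

# The subnet will only be created once MyVPC exists.
print(template["Resources"]["MySubnet"]["DependsOn"])  # MyVPC
```

Note that referencing the VPC with Ref already creates an implicit dependency in CloudFormation; DependsOn makes the ordering explicit.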




      30. There is a Direct Connect connection set up with your on-premises environment.
        You are performing a high availability review that includes the connection. What
        recommendation would you give? (Choose all that apply.)

        1. No action is needed. The connection is highly available by default.

        2. Recommend setting up a backup VPN.

        3. Recommend setting up a backup Direct Connect link.

        4. Recommend setting up a backup connection via an ELB.

        5. Use an API Gateway.

      31. Your company is deploying an authentication mechanism that will be used
        across all applications in AWS. The authentication application is hosted in a VPC
        with the IP address range 10.17.0.0/16. There are 11 other VPCs in your AWS
        account, and there are 3 more accounts with another 5, 8, and 12 VPCs, respectively. You
        need to ensure a setup that will allow all the other VPCs to be able to use the
        authentication application. Considering that you are working across different
        accounts, which solution would you recommend?


        1. Use a marketplace solution to set up an overlay network across all
          your VPCs that will allow communication across all your VPCs.

        2. Use VPC peering and peer all the VPCs to each other.

        3. Use a marketplace solution to set up a VPN between your
          on-premises environment and all the VPCs. Route the traffic
          through your on-premises environment.

        4. Use VPC peering and peer all the VPCs to only the authentication
          VPC.
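For a sense of scale, the number of peering connections required by a full mesh versus a hub-and-spoke design can be computed from the VPC counts in this question:

```python
# Peering connection counts for the scenario above: 12 VPCs in the first
# account (11 plus the authentication VPC) and 5, 8, and 12 in the others.

n = 12 + 5 + 8 + 12           # 37 VPCs in total

full_mesh = n * (n - 1) // 2  # every VPC peered with every other VPC
hub_and_spoke = n - 1         # every VPC peered only with the auth VPC

print(full_mesh, hub_and_spoke)  # 666 36
```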


      32. You have recently deployed an application to an EC2 cluster behind an ELB. You
        have an autoscaling group that scales between 8 and 48 instances. The
        application accepts HTTP connections, but due to a security review, you need to
        implement HTTPS on the internet portion of your application within the shortest
        time possible. Which solution would you recommend?


        1. Install an HTTPS certificate onto the instances and reconfigure the
          application to serve HTTPS.

        2. Use ACM to install a certificate on the ELB.




        3. Move your application to S3 and serve the content directly from S3
          via HTTPS.

        4. Use CloudFront and terminate all the calls at CloudFront via HTTPS.


      33. You have EC2 instances in four regions: us-west-2, us-east-1, eu-west-1, and
        ap-northeast-1. The instances download 1 TB of data daily from an input S3
        bucket in their local region, and upload a 100 MB aggregate report to a new
        output S3 bucket within their region. You are connecting to AWS via Direct
        Connect from your on-premises site to us-west-1, and you download the
        aggregate reports hourly from each region and upload the new 1 TB source for
        the jobs each day. During which part of the communication will you incur
        transfer charges?

        1. Downloading from S3 to the EC2 instances

        2. Uploading from the EC2 to the output S3 buckets

        3. Downloading from the output S3 buckets to your on-premises site
          via Direct Connect

        4. Uploading to input buckets via Direct Connect

      34. You are securing an application running in AWS. You need to identify the
        security best practices for your application. (Select all that apply.)


        1. Reduce the attack surface of your application by removing any
          unnecessary entry points.


        2. Reduce the size of your application to the minimum number of
          instances.

        3. Implement security at all levels.

        4. Leverage AWS security features only.

        5. Implement detailed monitoring of your resources.

        6. Select the appropriate security controls for your application. Not all
          security features are applicable in all cases.




      35. You are in charge of securing an application serving mobile clients behind an
        application load balancer. You are required to be able to control the traffic based
        on the IP address of the request and based on the expression used in the request.
        Which AWS solution could you implement to get the appropriate level of
        control?

        1. NACLs

        2. WAF

        3. Shield

        4. X-Ray

      36. Your company has several accounts with consolidated billing. You are setting up
        an AWS Direct Connect connection with a Private VIF in one of the accounts.
        Where would the charges for any downloads across the Direct Connect be
        recorded?

        1. In the main AWS account that has the billing.

        2. In the account where the AWS Direct Connect was created.

        3. There are no download charges when using Direct Connect.

        4. The charges are split across the main AWS account and the
          subaccount, depending on the origin of the download request.

      37. When planning to deploy a Direct Connect link, which features should your
        customer router support?

        1. Single-mode fiber

        2. 802.1Q VLAN

        3. 802.11ac

        4. Multi-mode fiber

        5. BGP with MD5 authentication




      38. Your company has set up a Direct Connect connection that uses several different
        public and private VIFs to enable a connection with different services. A
        requirement for a new, L2-isolated and encrypted connection to a new VPC has
        been expressed by the PCI team, and you have been assigned to set this up. What
        would be the correct approach to do this?


        1. Deploy a new public VIF. Create a new VLAN to the new VIF. The
          connection will be encrypted with IPSec.


        2. Deploy a new private VIF. Create a new VLAN to the new VIF. The
          connection will be encrypted with IPSec.


        3. Deploy a new public VIF. Create a new VPN to the new VIF. The
          connection will be encrypted with IPSec.

        4. Deploy a new private VIF. Create a new VPN to the new VIF. The
          connection will be encrypted with IPSec.


      39. You have been tasked with performing deep packet analysis on the VPC traffic.
        What would you use?

        1. VPC Flow Logs

        2. A third-party packet analyzer

        3. AWS WAF

        4. AWS CloudTrail

      40. You have started up an instance in a private subnet in a VPC. You try to
        associate an Elastic IP with the instance, but are unable to do so. Why?

        1. An IGW is not attached to the subnet.

        2. An ENI is not attached to the instance.

        3. A NAT gateway is not attached to the subnet.

        4. A Public IP is not attached to the instance.




      41. You have deployed a VPC with an IPv6 network. Now, you need to remove the
        IPv4 network. What steps do you need to take?


        1. Take the VPC offline and remove the IPv4 range in the management
          console.

        2. Remove the IPv4 range in the management console.

        3. Remove the IPv4 range in the CLI.

        4. This cannot be done.

      42. You are diagnosing the traffic flow from a VPC-enabled ECS container and need
        to understand which requests were accepted and rejected by this container. What
        can you do?


        1. Use VPC Flow Logs on the ENI of the container. Look for ACCEPT
          OK and REJECT OK entries in the log.

        2. Since ECS containers do not have ENIs, use VPC Flow Logs on the
          VPC network. Look for ACCEPT OK and REJECT OK entries in the log
          that point to the IP address of the container.


        3. Use VPC Flow Logs on the subnet. Look for ACCEPT OK and
          REJECT OK entries in the log.

        4. This cannot be done.

      43. You are deploying an IPv6-only private subnet. To update the instances software,
        you are looking to deploy a NAT gateway for this subnet. Which option would
        you choose?

        1. Use the NAT gateway and select Enable IPv6 on creation.

        2. Use an egress-only internet gateway instead.

        3. Use an internet gateway instead.

        4. Use a virtual private gateway instead.




      44. You are creating a VPN from your on-premise site to AWS. Which firewall
        rules are required on the client side to connect a VPN? (Choose all that apply.)

        1. UDP port 50

        2. UDP port 500

        3. TCP port 50

        4. TCP port 500

        5. Protocol 50

        6. Protocol 500
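The distinction this question tests is that ESP is IP protocol 50 (not a TCP or UDP port), while IKE key exchange runs over UDP port 500. A small sketch encoding those general IPSec facts; the rule table and helper names are illustrative, not AWS API calls, and UDP 4500 (NAT traversal) is included as a general IPSec note rather than one of the listed options:

```python
# Standard IPSec firewall requirements for a site-to-site VPN
# (general IPSec facts; NAT-T on UDP 4500 applies only when NAT is in the path).
IPSEC_RULES = [
    {"name": "IKE",   "kind": "udp_port",    "value": 500},  # key exchange
    {"name": "ESP",   "kind": "ip_protocol", "value": 50},   # encrypted payload
    {"name": "NAT-T", "kind": "udp_port",    "value": 4500}, # IKE/ESP over NAT
]

def required_udp_ports(rules):
    """Return the UDP ports that must be open on the client firewall."""
    return sorted(r["value"] for r in rules if r["kind"] == "udp_port")

def required_ip_protocols(rules):
    """Return the raw IP protocol numbers (not ports) that must be allowed."""
    return sorted(r["value"] for r in rules if r["kind"] == "ip_protocol")

print(required_udp_ports(IPSEC_RULES))     # [500, 4500]
print(required_ip_protocols(IPSEC_RULES))  # [50]
```

Note that "protocol 50" and "port 50" are different things: a firewall rule for ESP must match the IP protocol field, which is why the TCP/UDP port 50 options are distractors.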

      45. A Lambda function needs to interact with an EC2 instance in a private subnet.
        Once the exchange of information is complete, the Lambda needs to store the
        data in a DynamoDB table. How can you enable this? (Select all that apply.)


        1. Deploy the Lambda function into the VPC. This will assign the
          Lambda function a private IP to access the EC2 instance.


        2. The Lambda function will automatically have access to DynamoDB,
          since Lambda has a public endpoint.

        3. Use a NAT gateway to allow access outside of the VPC.

        4. Use a DynamoDB VPC endpoint.

        5. Deploy a Route to the Lambda service in the private VPC.

      46. How can you easily automate VPC peering in your AWS account for any newly
        created VPCs?

        1. Use an Elastic Beanstalk application with the peer-VPCs setting.

        2. Use a Lambda function to detect any newly created VPCs and peer
          them.

        3. Use a CloudFormation template and define peering in the template.

        4. This cannot be done.




      47. Your EC2 instance is acting as an origin for a CloudFront distribution. You need
        to maintain end-to-end encryption of your traffic at all times. Which options can
        you configure on CloudFront to ensure end-to-end encryption?


        1. Set the viewer policy to redirect HTTP to HTTPS. Set the origin
          policy to match viewer.


        2. Set the viewer policy as HTTPS. Install an SSL certificate on the
          instance.

        3. Set the viewer policy as HTTP. Set the origin policy to match viewer.

        4. Set the viewer policy to redirect HTTP to HTTPS. Set the origin to
          HTTP.


      48. To create a private VIF for a VPN on a Direct Connect connection, which of the
        following is required?

        1. The on-premise subnet ID

        2. The VLAN ID

        3. The VGW ID

        4. The IGW ID

      49. In CloudFront, to optimize the performance of your application but still maintain
        PCI compatibility, which of the following can you use?

        1. End-to-end encryption with SSL

        2. Field-level encryption and SSL

        3. End-to-end encryption with SSL offloading

        4. Field-level encryption and SSL offloading

      50. You have a CRM application running in your VPC. The setup has the following
        components:

        VPC with CIDR 10.0.0.0/16

        Subnets: A 10.0.0.0/24, B 10.0.1.0/24, C 10.0.2.0/24, D 10.0.3.0/24,
        E 10.0.4.0/24, and F 10.0.5.0/24 (two in each AZ)

        A VGW with the ID vgw-ad83aa7f




        A routing table with the following entries:

        10.0.0.0/16 - local

        192.168.18.0/24 - vgw-ad83aa7f

        192.168.19.0/24 - vgw-ad83aa7f

        A default NACL that ALLOWS ALL traffic IN and OUT

        A default security group that ALLOWS ALL traffic OUT and DENIES
        ALL traffic IN

        A VPN security group that ALLOWS ALL HTTPS traffic IN from the
        192.168.18.0/24 network

        The CRM instances are deployed in all subnets


        Your CRM application requires access to S3. Which options would allow you to
        grant access to S3? (Select all that apply.)

        1. Create a NAT instance in subnet A.

        2. Attach a VPC endpoint to the VPC.

        3. Attach an IGW to subnet A.

        4. Create a NACL rule to allow access to S3 in the VPC.

        5. Create a security group rule to allow access to S3 and attach it to the
          CRM instances.

        6. Create a route in the default routing table to the VPC endpoint.

        7. Create a route in a new routing table to the VPC endpoint and attach
          it to subnets B, D, and F.


      51. You need to increase the performance of your application's read and write
        responses to the clients. Which service would you choose in your deployment to
        achieve that goal?

        1. CloudFormation

        2. CloudFront

        3. API Gateway

        4. S3




      52. You have deployed your application behind a load balancer. You need to point
        your website, mywebsite.com, to the application using Route 53. How can you
        achieve this?

        1. Create an ALIAS record using the load balancer DNS.

        2. Create a CNAME record using the load balancer DNS.

        3. Create an A record using the load balancer IP.

        4. Create a PTR record using the load balancer IP.

      53. You are establishing a Direct Connect link with AWS. Your customer gateway
        has been delivered and installed at the colocation facility and is ready for
        the cross-connect to be established. Who do you need to contact to get the
        cross-connect established?

        1. Contact AWS support.

        2. Contact the Direct Connect provider.

        3. Contact an AWS partner that can consult and help in this case.

        4. Raise a Direct Connect request in the management console.

      54. You are connecting a Direct Connect to your on-premise site. The setup has over
        300 /24 networks that need to be advertised across the Direct Connect link to the
        VPCs. What do you need to do to enable the connection across the Direct
        Connect link?

        1. Create public VIFs for each /24 network, and set up a separate VGW.

        2. Set up over 300 VLANs on the Direct Connect link.

        3. Create over 300 route entries in the BGP configuration.

        4. Summarize over 300 prefixes into less than 100 prefixes.
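The BGP session on a private VIF accepts only a limited number of advertised prefixes (100 is the commonly cited limit), which is why summarization is the fix here. The idea can be sketched with Python's standard `ipaddress` module; the 300 contiguous sample /24s are hypothetical:

```python
import ipaddress

# 300 hypothetical contiguous /24 networks: 10.0.0.0/24 ... 10.1.43.0/24
nets = [ipaddress.ip_network(f"10.{i // 256}.{i % 256}.0/24") for i in range(300)]

# collapse_addresses merges adjacent and overlapping networks into supernets
summarized = list(ipaddress.collapse_addresses(nets))

print(len(summarized))   # 4 prefixes instead of 300
print(summarized[0])     # 10.0.0.0/16
```

With contiguous ranges the reduction is dramatic; real on-premise address plans may summarize less cleanly, but the goal is the same: get the advertised prefix count under the limit.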

      55. You are accessing the S3 bucket through a VPC endpoint. Your application keeps
        getting a 403 response from the bucket. You have checked the security groups,
        NACLs, and routes, and everything looks good. What is the solution to the
        problem?

        1. Enable enhanced networking on the EC2 instance.

        2. Ensure that the bucket name is resolving to the correct DNS name.




        3. Ensure that the bucket policy allows access from the VPC.

        4. Enable the S3 ACL propagation to the VPC.


      56. You are deploying an application across regions. You need to be able to create
        the network configuration in a unified manner. Your company has chosen to
        build each part of the application with a separate CloudFormation template.
        What CloudFormation feature would you use when deploying the stacks from
        these templates to correctly deploy to any region?


        1. When deploying the first stack, record the session ID. Use the session
          ID in the next stack.

        2. When deploying the first stack, record the outputs for use in the next
          stack.

        3. When deploying the first stack, export the outputs to the next stack.

        4. Use the AWS CLI to deploy; it will be simpler.

      57. You are building a VPC that will host a highly available application. The
        application is required to have three nodes that determine the state of the
        application by comparing hashes across the network. When designing the
        infrastructure, which of the following assumptions are not true? (Select all that
        apply.)

        1. The application can be deployed into any region.

        2. The application requires three subnets in three availability zones to
          be created for high availability.

        3. The application cannot be deployed into any region.

        4. The application subnets should use the same routing table.

        5. The application requires two subnets in two availability zones to be
          created for high availability.




      58. When connecting 20 branch offices that have 10 to 20 employees, what is the
        easiest solution to use?

        1. VPN CloudHub

        2. IPSec VPNs with VGW

        3. Direct Connect

        4. NACL

      59. You have a requirement for highly available EC2 instances running in two VPCs
        to connect a mesh tunnel network with each other. The instances are deployed in
        VPC A with network 10.0.0.0/20 in us-west-1, and VPC B with network
        10.0.0.0/16 in eu-west-1. You have decided on a marketplace instance that is
        capable of creating an overlay network for the mesh. What else should you
        consider when setting up this deployment?


        1. Peer the VPCs instead of using the marketplace solution, and mesh
          the instances directly through the peering connection.

        2. Use the VGW instead of using the marketplace solution, and mesh
          the instances directly through the VPN.

        3. Implement a second marketplace instance.

        4. Use DNS with public IPs on all instances for redundancy.
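A key constraint in this scenario is that a VPC peering connection cannot be created between VPCs with overlapping CIDR blocks, which is what makes the overlay approach (made redundant with a second appliance instance) the viable path. The overlap is easy to verify with the standard `ipaddress` module:

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/20")  # VPC A in us-west-1
vpc_b = ipaddress.ip_network("10.0.0.0/16")  # VPC B in eu-west-1

# Overlapping address ranges rule out VPC peering between these two VPCs.
print(vpc_a.overlaps(vpc_b))  # True
```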

      60. You are moving an application to AWS. The application requires sending
        broadcast packets to its peers to exchange the cluster state. When you deploy the
        cluster to the VPC, the cluster state on the instances is marked as unhealthy. Why
        could that be?

        1. The EC2 instances need to be deployed in a cluster placement group.

        2. The instances are in separate subnets, and broadcast packets are not
          being sent across the router.

        3. Broadcast is not allowed in the AWS VPC.

        4. The EC2 instances need enhanced networking to be enabled.




Mock Test 2


  1. Your company has a requirement for a new Direct Connect link, but the
    requirement is for a 100 Mbps connection. Can this be achieved? (Select all
    that apply.)

    1. No. Direct Connect is only available at 1 Gbps or 10 Gbps.

    2. Yes. Use a Direct Connect Partner and ask for a 100 Mbps hosted
      connection.

    3. Yes. In some cases, Direct Connect is available at 100 Mbps.

    4. Yes. If you have a parent AWS account in the same company with an
      existing Direct Connect link, you can create a 100 Mbps Hosted Virtual
      Interface for the subaccount.

  2. The on-premises network of your company uses a next-generation firewall to
    perform traffic analyses and to determine the application from the pattern of
    packets on a switch port in promiscuous mode. You are designing the same
    architecture in AWS. You have found the vendor and the appliance on the AWS
    marketplace. How can you implement the next-generation appliance in AWS?


    1. Deploy the appliance in the same VPC as your application's VPC
      with promiscuous mode enabled on the primary network adapter of the
      appliance.


    2. Deploy the appliance in a separate security VPC. Enable promiscuous
      mode on a secondary ENI of the appliance.


    3. Deploy the appliance in the same VPC as your application's VPC.
      Route all the traffic from the application VPC to the appliance.

    4. Deploy the appliance in a separate security VPC. Route all the traffic
      from the application's VPC to the appliance.




  3. You are deploying a third-party WAF in front of your application. To make the
    deployment highly available, which approach would you use?


    1. Use an autoscaling WAF group in the same subnet as the web
      frontend. Proxy the requests to the EC2 instance IPs.

    2. Use a WAF sandwich between two ELBs.

    3. Use a WAF layer cake between two ELBs.

    4. Deploy a standalone WAF instance in a separate VPC. Route the
      traffic from the separate VPC to the EC2 instances.

  4. Your startup just got acquired by a multinational corporation. You are in charge
    of unifying the network infrastructure components. Both entities use multiple
    AWS accounts to deploy their application. The corporation that bought your
    startup is now enforcing the use of common authentication across all networks
    and applications. What would be the simplest way to enable your applications to
    authenticate to the corporate directory?


    1. Write an authentication broker running on two EC2 instances.
      Deploy one instance in the corporate directory VPC and one instance in
      your authentication VPC. Establish a secure way of exchanging
      credentials between the instances over the public IPs.

    2. Use VPC peering between your VPCs and the corporate
      authentication VPC.

    3. Establish a VPN between the accounts, as VPC peering is not
      supported across accounts.

    4. Move all your VPC resources into the corporate account.

  5. You are deploying a custom NAT instance inside your VPC. What would you
    need to do to make sure the traffic passes correctly from the private network to
    the internet?

    1. Propagate the private subnet routes on the NAT instances.

    2. Enable source destination check on the NAT instance EC2 settings.

    3. Disable source destination check on the NAT instance EC2 settings.

    4. Enable enhanced networking on the NAT instance EC2 settings.




  6. You are in charge of setting up a Direct Connect link from your on-premise to
    AWS. The core requirement is to have maximum fault tolerance and lowest
    latency for your connection. Which option satisfies those conditions?


    1. One AWS Direct Connect link to one customer gateway, one virtual
      private gateway, and one backup VPN on a public VIF to a second
      customer gateway.

    2. Two AWS Direct Connect links to two customer gateways and one
      virtual private gateway.

    3. Two AWS Direct Connect links to two customer gateways and two
      virtual private gateways.

    4. One AWS Direct Connect link to one customer gateway, two virtual
      private gateways, and one backup VPN on a public VIF to a second
      customer gateway.

  7. A hybrid server environment is running between AWS and on-premises. For the
    on-premise servers to be able to simply resolve the DNS names of the EC2
    instances, which service would you use?

    1. A simple AD in your VPC.

    2. A replica AD server of your on-premises AD in your VPC.

    3. Route 53.

    4. The on-premise DNS in your AD.

  8. You have configured a health check on your Route 53 record for your Windows
    on-premise servers. The health check is reporting all those instances as unhealthy
    even though your application is working. What could be the cause?

    1. On-premise servers are not supported for health checks in Route 53.

    2. On-premise servers can be used, but only for the Linux operating
      system.

    3. Check the on-premise firewalls and see whether the traffic from
      Route 53 is allowed.

    4. Delete your Route 53 health checks and recreate them. Wait 15
      minutes for the records to sync.




  9. You are deploying an HPC application in the AWS cloud. You need the
    application to communicate with your on-premise site. The servers in your
    on-premise environment have 32 CPUs, 64 GB of RAM, and 500 GB of locally
    attached SSD for scratch space. Each server has two 10 GbE adapters with an
    MTU of 9,000. You are planning to set up a VPC with a cluster placement group
    of EC2 instances and connect the deployment via a VPN with your on-premise
    servers. In the current setup, are there any considerations that need to be
    addressed?

    1. None, the setup will work fine.

    2. There aren't any instances in AWS that will match your on-premise
      server configuration closely enough.

    3. Using jumbo frames across VPN is not supported.

    4. High availability of the environment is reduced when using a cluster
      placement group.

  10. You are in charge of connecting your on-premise ETL environment to a data lake
    on S3. You need to download hundreds of terabytes of data from S3 in an
    efficient manner. What type of connection would you establish?

    1. A VPN between AWS and your on-premises.

    2. Connect directly to S3 over the internet.

    3. A Direct Connect link with a public VIF.

    4. A Direct Connect link with a private VIF.

  11. You have an application that communicates via IPv6. You would like to migrate
    the application to AWS. In this case, are there any considerations that need to be
    addressed?

    1. No, simply create a VPC with only an IPv6 address range.

    2. No, simply create a VPC with a primary IPv4 and a secondary IPv6
      address range.

    3. Yes, IPv6 is not supported in the VPC.

    4. Yes, IPv6 is only supported with enhanced networking instance
      types. Simply create a VPC with only an IPv6 address range and choose
      the correct instance type when deploying your application.




  12. You are planning to deploy a Direct Connect connection with a VPN backup. To
    enable failover from Direct Connect to the VPN, what needs to be set up?

    1. Choose the correct AS_PATH prepending for both connections.

    2. Advertise a more specific prefix on the Direct Connect link.

    3. Advertise a more specific prefix on the VPN link.

    4. Advertise two identical prefixes on both links.
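Failover in this setup relies on route selection: a more specific (longer) prefix wins over a less specific one, so advertising more specific prefixes over the Direct Connect link makes it the preferred path while the VPN's broader advertisement remains as backup. A minimal longest-prefix-match sketch; the route table contents are illustrative:

```python
import ipaddress

# Illustrative advertisements: Direct Connect advertises more specific /25s,
# the backup VPN advertises the covering /24.
routes = {
    "192.168.0.0/25": "direct-connect",
    "192.168.0.128/25": "direct-connect",
    "192.168.0.0/24": "vpn",
}

def best_route(destination, table):
    """Pick the matching route with the longest prefix (most specific)."""
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(p) for p in table
               if dest in ipaddress.ip_network(p)]
    return table[str(max(matches, key=lambda n: n.prefixlen))]

print(best_route("192.168.0.10", routes))  # direct-connect

# If the Direct Connect prefixes are withdrawn, traffic falls back to the VPN.
fallback = {"192.168.0.0/24": "vpn"}
print(best_route("192.168.0.10", fallback))  # vpn
```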

  13. In a hybrid environment, how can you make sure your EC2 instances can resolve
    the on-premise hosts?


    1. Create a DNS-link on Route 53 with your on-premise servers. This
      way, Route 53 will advertise your on-premise network DNS.

    2. Create an instance that will forward your DNS queries to on-premise
      servers. Set up DHCP options for your VPC to point to this instance.
      This way, the instance can resolve your on-premise network DNS.


    3. Create a private zone in Route 53. This way, Route 53 will advertise
      your on-premise network DNS.


    4. Create an instance that will forward your DNS queries to on-premise
      servers. Set up DHCP options for your subnet to point to this instance.
      This way, the instance can resolve your on-premise network DNS.

  14. When setting up an ELB, you are required to ensure that compliance procedures
    are followed. You decide to set up an application load balancer (ALB) that will
    automatically redirect HTTP to HTTPS. Your load balancer forwards all requests
    to your EC2 instances on port 443. What else could you do to increase the
    security of your application data in transit?


    1. The ALB cannot redirect your HTTP to HTTPS. You need to set up a
      redirect target group that will redirect any requests to HTTPS on the
      load balancer.


    2. The backend instances should be listening on the HTTPS port, not
      port 443.

    3. Block any HTTP requests on the ALB.

    4. Nothing. The setup is secure.




  15. You have deployed CloudFront to cache public websites stored in an S3 bucket.
    You like the way this setup works, and you are looking to deliver some private
    content in the same way. What option would you choose to deliver a few specific
    private files from S3 via CloudFront?

    1. Use signed URLs.

    2. Use signed certificates.

    3. Use WebSocket connections.

    4. Use bucket policies.

  16. An application is running in EC2. A new compliance policy has been introduced
    that will require stricter security in the VPC. An administrator eager for a
    promotion has quickly introduced a new security group and NACL policy that
    was not vetted by the security team. The application seems to be working, but
    your team lead has designated you with ensuring the packet flows are correct
    and the policies implemented are not going to break the application. What tools
    could you use to complete your task?


    1. Check the policies manually and then use the VPC Flow Logs to
      verify the packet flows.

    2. Use the policy simulator and then use the VPC Flow Logs to
      verify the packet flows.


    3. Check the policies manually and then use a third-party tool to sniff
      the packets and verify the packet flows.

    4. Use the policy simulator and then use a third-party tool to sniff the
      packets and verify the packet flows.


  17. A new compliance policy has been introduced that will require you to set up an
    IDS/IPS system that can perform packet analysis in AWS. What tool would you
    recommend using to deliver this functionality?

    1. AWS WAF

    2. AWS IPS

    3. AWS Shield

    4. A third-party IDS/IPS product




  18. There are several subnets in your VPC across several availability zones. Your
    team is proposing using an EC2 instance that will be connected via an ENI to
    multiple subnets to perform compliance scanning in the local network. Are
    there any considerations with this setup?


    1. It will work, but make sure to choose an instance that allows for
      enough secondary ENIs to be attached.

    2. It will work, but only among subnets in the same availability zone.

    3. This will not work.

    4. It will work, but only among either private or public subnets.

  19. Your company is running a single on-premise site that is connected to a VPC.
    Your company just got acquired by a larger company. They would like you to
    connect your environment to their security VPC, but the VPC IP address ranges
    overlap. What could you do to allow for connecting to both your VPC and the
    security VPC from your on-site deployment?

    1. Use BGP with a different AS_PATH for each VPC.

    2. Use a proxy instance in the security VPC that is out of the scope of
      your own VPC to pass traffic to the security tools.

    3. Redeploy your VPC to a new subnet.

    4. Use VPC peering between the VPCs.

    5. Use VRF on your on-premise router.

  20. You have a VPC with a public and a private subnet. You deploy a NAT gateway
    and try to connect to the update services from the private instances. You are
    unable to connect to the update services. What could be the cause?


    1. The NAT gateway is still being created. It can take up to 30 minutes
      for the NAT gateway to become available.

    2. The NAT gateway has been created in the private subnet.

    3. The NAT gateway has been created in the public subnet.

    4. The NAT ENI is not connected to the private subnet.




  21. You are looking at the ELB options. Your application requires extremely high
    performance at very high packet rates. Which would you choose?

    1. The ALB

    2. The Classic Load Balancer

    3. The NLB

    4. API Gateway

  22. You would like to create a DNS service for a domain that you have registered
    with a third-party provider. Which option would you choose?

    1. Create a Route 53 private zone.

    2. Create a Route 53 public zone.

    3. Create a custom DNS server in EC2 and host your zone.

    4. Use the third-party DNS, since third-party domains are not
      supported on AWS.


  23. When using a hosted virtual interface on a Direct Connect connection of another
    account, what does your account get charged for? (Choose all that apply.)

    1. Data transfer out of AWS

    2. Data transfer into AWS

    3. VIF uptime hours

    4. The Direct Connect link hours

    5. All of the above

  24. Your company is using a NAT gateway in one of your public subnets to deliver
    updates and outgoing internet access to 10 private subnets. Recently, the updates
    have started failing. You discover that the NAT gateway is at capacity. Which
    option would enable you to mitigate this automatically?

    1. Replace the NAT gateway with a very large NAT instance.

    2. Add the NAT gateway to a NAT gateway autoscaling group and
      autoscale above the maximum of one NAT instance.




    3. Write a CloudFormation script that will deploy an additional NAT
      instance in another public subnet. Create a CloudWatch trigger that will
      look at the aggregate performance of your NAT instance, and deploy
      another NAT gateway in case the existing NAT gateway is at maximum
      performance. Ensure that new routing tables are created by the
      CloudFormation script. Create a custom resource in the
      CloudFormation stack to trigger another lambda that will
      proportionally replace the routing tables in the private subnets.

    4. There is no simple way to mitigate this automatically. A manual
      intervention will be required to scale a NAT instance.


  25. You have two VPCs that you are trying to peer together. VPC A has a network of
    10.0.10.0/24, and VPC B has a network of 10.0.20.0/24. You have created a peering
    connection with the ID pcx-f48a26. Which routes need to be created for the
    peering to be completed? (Select all that apply.)

    1. Add a route in VPC A for 10.0.10.0/24 on pcx-f48a26.

    2. Add a route in VPC B for 10.0.10.0/24 on pcx-f48a26.

    3. Add a route in VPC A for 10.0.20.0/24 on pcx-f48a26.

    4. Add a route in VPC B for 10.0.20.0/24 on pcx-f48a26.

    5. Add a route in VPC B for 10.0.10.0/24 on vgw-f48a26.

    6. Add a route in VPC A for 10.0.20.0/24 on vgw-f48a26.
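To complete a peering, each side needs a route to the other VPC's CIDR that targets the peering connection ID (a pcx- identifier, not a vgw-, which is why the last two options are distractors). The required end state can be sketched as plain data:

```python
# Routes needed to complete peering connection pcx-f48a26 between the two VPCs.
route_tables = {
    "vpc-a": {"10.0.10.0/24": "local", "10.0.20.0/24": "pcx-f48a26"},
    "vpc-b": {"10.0.20.0/24": "local", "10.0.10.0/24": "pcx-f48a26"},
}

# Each VPC reaches the peer's CIDR via the peering connection.
print(route_tables["vpc-a"]["10.0.20.0/24"])  # pcx-f48a26
print(route_tables["vpc-b"]["10.0.10.0/24"])  # pcx-f48a26
```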

  26. To deploy an instance with up to 25 Gbps network throughput, what would you
    need to do?

    1. Select an instance type that supports enhanced networking.

    2. Select an instance type with 10 Gbps support and add a second ENI.

    3. Enable enhanced networking in the instance.

    4. Use a cluster placement group to deploy the instance.

    5. Select an instance type with 10 Gbps support and add two
      additional ENIs.




  27. You have a VPC with an internet gateway with the ID igw-117aa5f attached, and
    two public subnets created. Your route table has the following entries:

    10.0.0.0/16 - local

    0.0.0.0/0 - igw-117aa5f


    You have deployed an IPv6 network in the VPC and some instances with a dual
    stack. Since the IGW is attached to the VPC, you are expecting IPv6 unicast to kick
    in and make your instances available on the internet. However, the instances are
    not available on the internet. You try accessing them via IPv4, and they respond.
    What could be the problem?

    1. Dual stack is not supported in VPC.

    2. The IPv4 is always preferred. Remove the IPv4 addresses from the
      instances.

    3. You are missing the 0:0:0:0/0:0 entry in the route table.

    4. You are missing the ::/0 entry in the route table.
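An IPv6 default route is written ::/0, the IPv6 analogue of 0.0.0.0/0; the 0:0:0:0/0:0 form in option 3 is not a valid network at all. Python's standard `ipaddress` module makes the distinction easy to check:

```python
import ipaddress

# ::/0 is the valid IPv6 default route (the IPv6 analogue of 0.0.0.0/0).
default_v6 = ipaddress.ip_network("::/0")
print(default_v6.prefixlen)  # 0

# The malformed notation from the distractor option is rejected outright.
try:
    ipaddress.ip_network("0:0:0:0/0:0")
    valid = True
except ValueError:
    valid = False
print(valid)  # False
```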


  28. The following setup has been deployed by an ex-employee:

    A virtual private gateway

    A Direct Connect connection to your on-premise environment

    A backup VPN between your on-premise site and AWS over the internet


    The setup was designed to be highly available and the private resources are
    available to the on-premise users. However, AWS keeps sending you emails
    warning you that your VPN connection is not redundantly connected. What
    could be the cause of this?


    1. The Direct Connect connection does not offer a backup to the VPN
      link.

    2. The VPN connection gives you two tunnels to connect to for high
      availability. If only one tunnel is connected, AWS will periodically send
      out a notification that your VPN link is not redundant.




    3. The Direct Connect connection gives you two tunnels to connect to
      for high availability. If only one tunnel is connected, AWS will
      periodically send out a notification that your Direct Connect link is not
      redundant.

    4. The VPN connection does not offer a backup to the Direct Connect
      link.


  29. When transferring S3 data across a VPN connection, what kind of transfer costs
    would be incurred?

    1. Increased VPN data transfer costs out of S3.

    2. Standard data transfer costs out of S3.

    3. Lower VPN data transfer costs out of S3.

    4. No transfer costs are incurred over a VPN.

  30. When transferring S3 data across a Direct Connect connection, what kind of
    transfer costs would be incurred?

    1. Increased Direct Connect data transfer costs out of S3.

    2. Standard data transfer costs out of S3.

    3. Lower Direct Connect data transfer costs out of S3.

    4. No transfer costs are incurred over Direct Connect.

  31. You are running an application behind an ALB. You have whitelisted the ALB
    IPs in your corporate firewall, but a few days later the users complain the
    application is not available anymore. You resolve the ALB DNS name, but see
    that the IPs have changed. Your firewall only supports whitelisting IPs, so how
    can you ensure your ALB will be available from your corporate network?

    1. Assign an Elastic IP to the ALB.

    2. Assign two Elastic IPs to the ALB. This is required for redundancy.

    3. Implement an NLB in front of the ALB.

    4. Implement two NLBs in front of the ALB.




  32. There is a requirement to send all traffic in all your applications through a
    security VPC that will scan all the packets before letting them pass out of each
    VPC, on-premises site connected via Direct Connect, and branch office connected
    via VPN. What combination of services would you consider using to ensure this
    can be done properly?

    1. VPC Peering, WAF, ALB, and NLB

    2. VPC Peering, VPN, Direct Connect, and WAF

    3. VPC Peering, VPN, Direct Connect, a third-party packet scanner, and
      promiscuous mode


    4. VPC Peering, VPN, Direct Connect, a third-party packet scanner, and
      routing

  33. There is a requirement for instances and services to programmatically exchange
    information on the network. Your applications use a mix and match of different
    technologies, and you would like to unify the way they communicate. What
    service would you choose to achieve this?

    1. The ELB

    2. The API Gateway

    3. The internet gateway

    4. The virtual private gateway

  34. You are working for an ERP vendor. The company has just deployed its SaaS
    ERP solution and is looking for a way to integrate its system with other
    clients' networks in a secure and AWS-compatible manner. Which option would
    you evaluate as a possible solution?


    1. Implement VPNs with all the clients. Exchange the traffic across the
      VPN.


    2. Package the ERP into an AMI and let the clients deploy the AMI in
      their VPC.

    3. Peer the VPCs with your clients and exchange information over the
      peered connection.

    4. Use AWS PrivateLink to connect to the clients' VPCs.




  35. You are connecting to services via a public VIF through a Direct Connect link.
    The security team looks at the deployment and has some concerns about using
    the public VIF. What could they be concerned about?

    1. There is nothing to be concerned about.

    2. Due to the way BGP works, all public IPs are advertised on the public
      network. When connecting over the public VIF, your private IP is also
      advertised and reachable by any instances with a public IP.

    3. Due to the way BGP works, all private IPs are advertised on the
      private network. When connecting over the public VIF, your private IP
      is also advertised and reachable by any instances with a private IP.


    4. Due to the way BGP works, all public IPs are advertised on the
      public network. When connecting over the public VIF, your public IP is
      also advertised and reachable by any instances with a public IP.

  36. Your application is sitting in a pair of VPC private subnets. You use a NAT
    gateway to connect to the internet and transfer data mainly from S3 to the EC2
    instances. You need to be mindful regarding the costs of your application. Which
    costs would you need to consider in this setup? (Select all that apply.)

    1. Cost of EC2

    2. Cost of NAT gateway hours

    3. Cost of S3 storage

    4. VPC hours

    5. S3 transfer-out costs

    6. NAT data processing costs

    7. IGW hours




  37. You have deployed an application in a VPC with the 10.0.0.0/24 network and a
    10.0.0.0/24 subnet. You seem to have made a mistake, and would like to increase
    the size of the VPC network to 10.0.0.0/20. How can you do this?


    1. Stop all instances in the VPC. Put the VPC in maintenance and
      change the network CIDR.


    2. Terminate all instances in the VPC. Put the VPC in maintenance and
      change the network CIDR.

    3. Terminate all instances in the VPC. Remove the subnet and change
      the network CIDR.


    4. This is not possible. Create a new VPC and move the resources to the
      new VPC.

  38. You are implementing a change in the management system that will track any
    changes to security groups and network access control lists in AWS. Which
    service would you be able to use to detect any of these changes?

    1. CloudWatch

    2. CloudTrail

    3. CloudFormation

    4. CloudFront
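Changes like these surface as API calls recorded in the audit trail; a common pattern is an event rule that matches the recorded calls which modify security groups and NACLs. A hedged sketch of such a rule pattern (the event names are the documented EC2 API actions; the overall rule wiring is illustrative):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventName": [
      "AuthorizeSecurityGroupIngress",
      "AuthorizeSecurityGroupEgress",
      "RevokeSecurityGroupIngress",
      "RevokeSecurityGroupEgress",
      "CreateNetworkAclEntry",
      "DeleteNetworkAclEntry",
      "ReplaceNetworkAclEntry"
    ]
  }
}
```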

  39. You are setting up an application in a VPC that will host a secure payment
    system. You have put security in place at all levels according to the requirements
    of the PCI DSS. You are worried about how to make sure that AWS is compliant
    with the PCI DSS standard before you roll out your application design. How
    could you get your hands on an AWS compliance report?

    1. Use AWS Artifact.

    2. Open a ticket on AWS support.

    3. Contact AWS by phone. Only phone support will be allowed to give
      you a compliance report.

    4. Compliance reports are internal and are not available to the public.




  40. You are writing a CloudFormation template. You would like to use a third-party
    firewall appliance in your deployment. You have written a full stack for the VPC,
    subnets, and tested it. You find the solution on the marketplace and try to deploy
    it to EC2. Both of these work. How could you further automate the deployment?


    1. Reference the third-party AMI in the template and deploy once the
      VPC and subnets are deployed.

    2. You cannot use marketplace products in CloudFormation.

    3. Reference the marketplace ID in the template and deploy once the
      VPC and subnets are deployed.

    4. Use a custom resource that calls a Lambda function that deploys the
      solution from the marketplace.


  41. The company you work for has a Direct Connect link to S3 through a public VIF.
    Some users in the company are trying to spin up instances in a private network
    and are trying to connect to them via their private IPs. They are getting a
    connection timed-out error. What would be the most likely cause?

    1. The instance security group does not allow access to the instances.

    2. The NACL does not allow access to the instances.

    3. The public VIF does not allow access to the instances.

    4. The instance operating system firewall does not allow access to the
      instances.

  42. You need to implement a DevOps approach to deploying your networks. The
    core requirement is that you use an Infrastructure as Code approach. Which of
    the following services would you use to deploy the networks in a DevOps-
    compatible way? (Select all that apply.)

    1. CodeDeploy

    2. OpsWorks Stacks

    3. AWS DevOps Stacks

    4. CloudFormation




  43. An EC2 instance cannot be pinged on the Elastic IP attached to it. You are able to
    log in but not ping. You attach the Elastic IP to another instance and you are able
    to ping it. What would be the most likely cause of this behavior?

    1. The second EC2 instance has enhanced networking enabled.

    2. The second EC2 instance has ICMP allowed in the security group.

    3. The first EC2 instance has ICMP denied in the NACL.

    4. The first EC2 instance has enhanced networking enabled.

  44. When deploying a CloudFront distribution, you need to terminate both incoming
    and outgoing connections. Which option would you choose in the distribution
    settings to allow this?

    1. GET, HEAD, and OPTIONS methods.

    2. GET, HEAD, OPTIONS, PUT, POST, PATCH, and DELETE methods.

    3. Read/write distribution.

    4. SSL offloading.

  45. You are deploying a static website on S3. You are using AWS services to deliver
    the mywebsite.com domain. You want the site to be as secure and as fast as
    possible. What steps would you need to take to deploy the services? (Select all
    that apply.)

    1. Register the mywebsite.com domain in Route 53 and create a public
      zone.

    2. Create a public bucket policy.

    3. Create an origin access identity bucket policy.

    4. Deploy a CloudFront distribution with an origin access policy.

    5. Create a CNAME record for the distribution in Route 53.

    6. Create an ALIAS record for the distribution in Route 53.

    7. Create an ALIAS record for the S3 bucket in Route 53.

    8. Create a CNAME record for the S3 bucket in Route 53.
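For the origin access identity option, the bucket policy grants read access only to CloudFront's OAI. A hedged sketch, using the question's bucket name and a placeholder OAI ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mywebsite.com/*"
  }]
}
```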




  46. You have an OpenID central authentication VPC that is peered with your
    application VPC. You try to authenticate but get a connection timed out
    response from the OpenID resource. What would you need to do to fix this
    issue?

    1. Switch the VPC peering to a VPN.

    2. Deploy an ENI on the OpenID server in a public subnet of the central
      VPC.


    3. Configure the security group of the OpenID application with the IP
      ranges of your VPC subnets to allow access.

    4. Configure the security group of the OpenID application with the
      subnet IDs of your VPC subnets to allow access.


  47. You have been called by your security response team because your EC2
    application is under a DoS attack. What can you do to quickly block the IP
    addresses that are sending the DoS traffic?


    1. Use VPC Flow Logs to discover the offending IPs and deploy a WAF
      to implement the IP-based rules.

    2. Use VPC Flow Logs to discover the offending IPs and deploy NACL-
      based blocking rules.

    3. Use VPC Flow Logs to discover the offending IPs and deploy Shield
      Advanced to implement the IP-based rules.


    4. Use VPC Flow Logs to discover the offending IPs and deploy a third-
      party marketplace solution to implement the IP-based rules.
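The "discover the offending IPs" step can be made concrete: flow log records are space-separated lines whose fourth field (in the default format) is the source address, so tallying them finds the top talkers. A sketch with made-up sample lines:

```python
# Sketch: tallying source IPs from VPC Flow Log lines to spot DoS offenders
# before writing NACL deny rules. The sample lines follow the default
# flow-log field order (field index 3 is the source address).
from collections import Counter

LOG_LINES = [
    "2 123456789012 eni-abc123 203.0.113.12 10.0.1.5 44332 80 6 1200 980000 1620000000 1620000060 ACCEPT OK",
    "2 123456789012 eni-abc123 203.0.113.12 10.0.1.5 44333 80 6 1500 1200000 1620000000 1620000060 ACCEPT OK",
    "2 123456789012 eni-abc123 198.51.100.7 10.0.1.5 55001 80 6 10 8000 1620000000 1620000060 ACCEPT OK",
]

src_counts = Counter(line.split()[3] for line in LOG_LINES)
print(src_counts.most_common(1))  # [('203.0.113.12', 2)]
```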

  48. You are deploying a cluster of instances behind an ELB that uses a custom UDP-
    based communication. Which option would be appropriate for this deployment?

    1. The ALB

    2. Classic Load Balancer

    3. Network Load Balancer

    4. None of the above




  49. You are deploying a VPC with the 10.0.0.0/28 CIDR. How many subnets can you
    deploy into this VPC?

    1. 1

    2. 0

    3. 2

    4. 28
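A /28 is worth sizing out explicitly. The ipaddress module shows the arithmetic; note that AWS's smallest allowed subnet is itself a /28, so the whole VPC can hold only one subnet:

```python
# Sketch: a 10.0.0.0/28 VPC has just 16 addresses. Since AWS's minimum
# subnet size is /28, the only possible subnet is the entire VPC range.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/28")
print(vpc.num_addresses)                      # 16
print(len(list(vpc.subnets(new_prefix=28))))  # 1 -- the subnet is the whole VPC
```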

  50. What is the most important consideration when connecting VPCs via VPC
    peering?

    1. The account that the VPCs belong to

    2. The IP address ranges

    3. The regions that the VPCs are in

    4. The ASN of the VPC peer

  51. When creating a VPC, which of the following is NOT true? (Select all that apply.)


    1. You can create a primary IPv4 network address range to a VPC.

    2. You can create a primary IPv6 network address range to a VPC.

    3. By default, you can create four additional IPv4 network address
      ranges to a VPC.

    4. You can create one secondary IPv6 network address range to a VPC.

    5. By default, you can create four additional IPv6 network address
      ranges to a VPC.

    6. By default, you can create as many subnets as a CIDR notation will
      allow in a VPC network.




  52. You are creating a network CloudFormation template. Which CloudFormation
    resource type would you use to assign an IPv6 address?

    1. AWS::EC2::NetworkInterface

    2. AWS::EC2::IPv6Address

    3. AWS::VPC::ENI

    4. AWS::VPC::IPv6Address

  53. In CloudFormation, what does the following snippet of code do?

    "MyRule" : {
      "Type" : "AWS::EC2::NetworkAcl",
      "Properties" : {
        "VpcId" : { "Ref" : "MyVPC" }
      }
    },


    1. The snippet attaches a security group rule to a VPC.

    2. The snippet attaches a NACL rule to a VPC.

    3. The snippet creates a NACL within a VPC.

    4. The snippet creates a security group in a VPC.
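For reference, AWS::EC2::NetworkAcl creates the ACL itself, while individual rules are separate AWS::EC2::NetworkAclEntry resources. A hedged sketch of an entry in the snippet's JSON style (resource names and values illustrative):

```json
"MyNaclRule" : {
  "Type" : "AWS::EC2::NetworkAclEntry",
  "Properties" : {
    "NetworkAclId" : { "Ref" : "MyRule" },
    "RuleNumber" : 100,
    "Protocol" : 6,
    "RuleAction" : "allow",
    "CidrBlock" : "0.0.0.0/0",
    "PortRange" : { "From" : 443, "To" : 443 }
  }
}
```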


  54. You are creating a subnet in a VPC. To make the subnet highly available, which
    of the following would you need to do?

    1. Spread the subnet across at least two availability zones.

    2. A subnet cannot be highly available.

    3. Spread the VPC across at least two availability zones.

    4. Create a secondary subnet IP range that can exist in another
      availability zone and attach it to the subnet.




  55. When deploying a VPN service in AWS, you need to consider all levels of high
    availability of your application. One of the factors proposed by your routing
    team is to make sure you have two VPN devices on two distinct ASN backends.
    You decide this is doable in AWS and start building the solution. Which option
    would help you achieve this?


    1. Deploy two AWS VPNs via a VGW. The ASN is assigned randomly
      so you will likely get two different ASNs assigned.


    2. Deploy two custom VPN devices on EC2. Choose two Elastic IPs that
      map to two different ASNs.


    3. Deploy two AWS VPNs via a VGW. Select a different ASN for each
      VGW.

    4. Deploy two custom VPN devices on EC2. Select a different ASN for
      each instance public IP.


  56. You need to ensure encryption in transit is configured correctly on your AWS
    deployments. You only use managed services to connect to AWS. Which
    encryption technology is available across all services for data in transit protection
    on AWS?

    1. ESP

    2. GRE

    3. TLS

    4. DH

  57. When deploying a Direct Connect link, your company has found that some of the
    devices in the on-premises location are not compatible with the BGP routing
    protocol. When using Direct Connect, what is the alternative approach to using
    BGP? (Select all that apply)

    1. Static routing

    2. BGP with encapsulated static routing

    3. RIPv2

    4. BGP with encapsulated RIPv2

    5. None of the above




  58. You have created a highly available, public/private VPC with the VPC wizard.
    Which of the following objects will NOT get created?

    1. Two public subnets in two availability zones

    2. Two private subnets in two availability zones

    3. Two internet gateways

    4. A NAT instance with an Elastic IP

    5. Two routing tables

  59. Which on-site connection type would you recommend to a bioengineering
    startup that would like to get the following:

    1 Gbps throughput
    Reliable link
    Use of BGP


    1. Direct Connect

    2. AWS VPN with VGW

    3. A custom third-party VPN

    4. VPC peering


  60. Your company has a legacy corporate website on EC2 instances. The website has
    had some issues in the past, and the users visiting the site have seen some 503
    error responses on the browser. The business has decided to invest in a redesign
    that will take three months. In the meantime, they would like you to improve the
    experience of the users visiting the site. As a network engineer, what simple and
    cheap workaround could you recommend?

    1. Put the EC2 servers behind a load balancer.

    2. Deploy a static copy of the site to S3 and use Route 53 health checks
      to fail over to S3 in case the site is down.


    3. Clone the EC2 instances into another deployment, and use Route 53
      health checks to fail over to S3 in case the site is down.

    4. Use autoscaling to increase the performance of the EC2 instances.



xxx Introduction


Assessment Test

  1. You have been hired as a solution architect for a large media conglomerate that wants a
    cost-effective way to store a large collection of recorded interviews with the guests collected
    as MP4 files and a data warehouse system to capture the data across the enterprise and
    provide access via BI tools. Which of the following is the most cost-effective solution for this
    requirement?

    1. Store large media files in Amazon Redshift and metadata in Amazon DynamoDB. Use
      Amazon DynamoDB and Redshift to provide decision-making with BI tools.

    2. Store large media files in Amazon S3 and metadata in Amazon Redshift. Use Amazon
      Redshift to provide decision-making with BI tools.

    3. Store large media files in Amazon S3, and store media metadata in Amazon EMR. Use
      Spark on EMR to provide decision-making with BI tools.

    4. Store media files in Amazon S3, and store media metadata in Amazon DynamoDB.
      Use DynamoDB to provide decision-making with BI tools.

  2. Which of the following is a distributed data processing option on Apache Hadoop and was
    the main processing engine until Hadoop 2.0?

    1. MapReduce

    2. YARN

    3. Hive

    4. ZooKeeper
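As a refresher on the model this question references, MapReduce splits work into a map phase that emits key/value pairs and a reduce phase that combines values per key. A minimal word-count sketch (the classic example, not Hadoop code):

```python
# Sketch of the MapReduce model: map emits (word, 1) pairs,
# reduce sums the counts per key.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase(["big data", "big deal"])))  # {'big': 2, 'data': 1, 'deal': 1}
```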

  3. You are working as an enterprise architect for a large fashion retailer based out of Madrid,
    Spain. The team is looking to build ETL and has large datasets that need to be transformed.
    Data is arriving from a number of sources and hence deduplication is also an important
    factor. Which of the following is the simplest way to process data on AWS?

    1. Load data into Amazon Redshift, and build transformations using SQL. Build custom
      deduplication script.

    2. Use AWS Glue to transform the data using the built-in FindMatches ML Transform.

    3. Load data into Amazon EMR, build Spark SQL scripts, and use custom deduplication
      script.

    4. Use Amazon Athena for transformation and deduplication.

  4. Which of these statements are true about AWS Glue crawlers? (Choose three.)

    1. AWS Glue crawlers provide built-in classifiers that can be used to classify any type of
      data.

    2. AWS Glue crawlers can connect to Amazon S3, Amazon RDS, Amazon Redshift,
      Amazon DynamoDB, and any JDBC sources.

    3. AWS Glue crawlers provide custom classifiers, which provide the option to classify
      data that cannot be classified by built-in classifiers.

    4. AWS Glue crawlers write metadata to AWS Glue Data Catalog.



  5. You are working as an enterprise architect for a large player within the entertainment
    industry that has grown organically and by acquisition of other media players. The team
    is looking to build a central catalog of information that is spread across multiple
    databases (all of which have a JDBC interface), Amazon S3, Amazon Redshift, Amazon RDS,
    and Amazon DynamoDB tables. Which of the following is the most cost-effective way to
    achieve this on AWS?

    1. Build scripts to extract the metadata from the different databases using native APIs
      and load them into Amazon Redshift. Build appropriate indexes and UI to support
      searching.

    2. Build scripts to extract the metadata from the different databases using native APIs
      and load them into Amazon DynamoDB. Build appropriate indexes and UI to support
      searching.

    3. Build scripts to extract the metadata from the different databases using native APIs
      and load them into an RDS database. Build appropriate indexes and UI to support
      searching.

    4. Use AWS crawlers to crawl the data sources to build a central catalog. Use AWS Glue
      UI to support metadata searching.

  6. You are working as a data architect for a large financial institution that has built its data
    platform on AWS. It is looking to implement fraud detection by identifying duplicate
    customer accounts and looking at when a newly created account matches one for a previously
    fraudulent user. The company wants to achieve this quickly and is looking to reduce the
    amount of custom code that might be needed to build this. Which of the following is the
    most cost-effective way to achieve this on AWS?

    1. Build a custom deduplication script using Spark on Amazon EMR. Use PySpark to
      compare dataframes representing the new customers and fraudulent customers to
      identify matches.

    2. Load the data to Amazon Redshift and use SQL to build deduplication.

    3. Load the data to Amazon S3, which forms the basis of your data lake. Use Amazon
      Athena to build a deduplication script.

    4. Load data to Amazon S3. Use AWS Glue FindMatches Transform to implement this.

  7. Where is the metadata definition stored in the AWS Glue service?

    1. Table

    2. Configuration files

    3. Schema

    4. Items

  8. AWS Glue provides an interface to Amazon SageMaker notebooks and Apache
    Zeppelin notebook servers. You can also open a SageMaker notebook from the AWS
    Glue console directly.

    1. True

    2. False



  9. AWS Glue provides support for which of the following languages? (Choose two.)

    1. SQL

    2. Java

    3. Scala

    4. Python

  10. You work for a large ad-tech company that has a set of predefined ads displayed routinely.
    Due to the popularity of your products, your website is getting popular, garnering attention
    of a diverse set of visitors. You are currently placing dynamic ads based on user click data,
    but you have discovered the processing time is not keeping up to display the new ads since a
    user's stay on the website is short-lived (a few seconds) compared to your turnaround time
    for delivering a new ad (less than a minute). You have been asked to evaluate AWS platform
    services for a possible solution to analyze the problem and reduce overall ad serving time.
    What is your recommendation?

    1. Push the clickstream data to an Amazon SQS queue. Have your application subscribe
      to the SQS queue and write data to an Amazon RDS instance. Perform analysis using
      SQL.

    2. Move the website to be hosted in AWS and use AWS Kinesis to dynamically process the
      user clickstream in real time.

    3. Push web clicks to Amazon Kinesis Firehose and analyze with Kinesis Analytics or
      Kinesis Client Library.

    4. Push web clicks to Amazon Kinesis Stream and analyze with Kinesis Analytics or
      Kinesis Client Library (KCL).

  11. You work for a new startup that is building satellite navigation systems competing with the
    likes of Garmin, TomTom, Google Maps, and Waze. The company’s key selling point is its
    ability to personalize the travel experience based on your profile and use your data to get
    you discounted rates at various merchants. Its application is having huge success and the
    company now needs to load some of the streaming data from other applications onto AWS
    in addition to providing a secure and private connection from its on-premises data centers
    to AWS. Which of the following options will satisfy the requirement? (Choose two.)

    1. AWS IoT Core

    2. AWS IoT Device Management

    3. Amazon Kinesis

    4. Direct Connect

  12. You work for a toy manufacturer whose assembly line contains GPS devices that track the
    movement of the toys on the conveyer belt and identify the real-time production status.
    Which of the following tools will you use on the AWS platform to ingest this data?

    1. Amazon Redshift

    2. Amazon Pinpoint

    3. Amazon Kinesis

    4. Amazon SQS



  13. Which of the following refers to performing a single action on multiple items instead of
    repeatedly performing the action on each individual item in a Kinesis stream?

    1. Batching

    2. Collection

    3. Aggregation

    4. Compression
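The idea behind batching can be shown in a few lines: instead of one API call per record, records are grouped and sent in chunks. A sketch mirroring the PutRecords-style pattern (the 500-record chunk size matches PutRecords' documented per-call limit; the function itself is illustrative):

```python
# Sketch: grouping many small records into batched calls, as a PutRecords
# client (or the KPL's collection feature) would, instead of one call each.
def batch(records, batch_size=500):
    """Yield records in chunks of at most batch_size."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

calls = list(batch([f"rec-{i}" for i in range(1200)]))
print(len(calls))     # 3 calls instead of 1200
print(len(calls[0]))  # 500
```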

  14. What is the term given to a sequence of data records in a stream in AWS Kinesis?

    1. Batch

    2. Group Stream

    3. Consumer

    4. Shard

  15. You are working for a large telecom provider who has chosen the AWS platform for its data
    and analytics needs. It has agreed to using a data lake and S3 as the platform of choice for
    the data lake. The company is getting data generated from DPI (deep packet inspection)
    probes in near real time and looking to ingest it into S3 in batches of 100 MB or 2 minutes,
    whichever comes first. Which of the following is an ideal choice for the use case without
    any additional custom implementation?

    1. Amazon Kinesis Data Analytics

    2. Amazon Kinesis Data Firehose

    3. Amazon Kinesis Data Streams

    4. Amazon Redshift

  16. You are working for a car manufacturer that is using Apache Kafka for its streaming needs.
    Its core challenges are the scalability and manageability of its current on-premises Kafka
    infrastructure, along with the escalating cost of the human resources required to manage the
    application. The company is looking to migrate its analytics platform to AWS. Which of the
    following is an ideal choice on the AWS platform for this migration?

    1. Amazon Kinesis Data Streams

    2. Apache Kafka on EC2 instances

    3. Amazon Managed Streaming for Kafka

    4. Apache Flink on EC2 instances

  17. You are working for a large semiconductor manufacturer based out of Taiwan that is using
    Apache Kafka for its streaming needs. It is looking to migrate its analytics platform to AWS
    and Amazon Managed Streaming for Kafka and needs your help to right-size the cluster.
    Which of the following will be the best way to size your Kafka cluster? (Choose two.)

    1. Lift and shift your on-premises cluster.

    2. Use your on-premises cluster as a guideline.

    3. Perform a deep analysis of usage, patterns, and workloads before coming up with a
      recommendation.

    4. Use the MSK calculator for pricing and sizing.



  18. You are running an MSK cluster that is running out of disk space. What can you do to
    mitigate the issue and avoid running out of space in the future? (Choose four.)

    1. Create a CloudWatch alarm that watches the KafkaDataLogsDiskUsed metric.

    2. Create a CloudWatch alarm that watches the KafkaDiskUsed metric.

    3. Reduce message retention period.

    4. Delete unused shards.

    5. Delete unused topics.

    6. Increase broker storage.

  19. Which of the following services can act as sources for Amazon Kinesis Data Firehose?

    1. Amazon Managed Streaming for Kafka

    2. Amazon Kinesis Data Streams

    3. AWS Lambda

    4. AWS IoT

  20. How does a Kinesis data stream distribute data across different shards?

    1. ShardId

    2. Row hash key

    3. Record sequence number

    4. Partition key
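The routing mechanism is worth remembering: the partition key is MD5-hashed into a 128-bit integer, and each shard owns a contiguous range of that hash space. A sketch with an illustrative shard count and keys (not an AWS SDK call):

```python
# Sketch of Kinesis record routing: MD5-hash the partition key to a
# 128-bit integer, then map it onto evenly-split shard hash ranges.
import hashlib

NUM_SHARDS = 4
SPACE = 2 ** 128

def shard_for(partition_key: str) -> int:
    """Map a partition key onto one of NUM_SHARDS hash ranges."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * NUM_SHARDS // SPACE

for key in ("user-1", "user-2", "user-3"):
    print(key, "-> shard", shard_for(key))
```

The same key always lands on the same shard, which is why a skewed key distribution produces hot shards.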

  21. How can you write data to a Kinesis Data Stream? (Choose three.)

    1. Kinesis Producer Library

    2. Kinesis Agent

    3. Kinesis SDK

    4. Kinesis Consumer Library

  22. You are working for an upcoming e-commerce retailer that has seen its sales quadruple
    during the pandemic. It is looking to understand more about the customer purchase behavior
    on its website and believes that analyzing clickstream data might provide insight into the
    customers' time spent on the website. The clickstream data is being ingested in a streaming
    fashion with Kinesis Data Streams. The analysts are looking to rely on their advanced SQL
    skills, while the management is looking for a serverless model to reduce their TCO rather
    than upfront investment. What is the best solution?

    1. Spark streaming on Amazon EMR

    2. Amazon Redshift

    3. AWS Lambda with Kinesis Data Streams

    4. Kinesis Data Analytics

  23. Which of the following writes data to a Kinesis stream?

    1. Consumers

    2. Producers



    3. Amazon MSK

    4. Shards

  24. Which of the following statements are true about KPL (Kinesis Producer Library)?
    (Choose three.)

    1. Writes to one or more Kinesis Data Streams with an automatic and configurable retry
      mechanism.

    2. Aggregates user records to increase payload size.

    3. Submits CloudWatch metrics on your behalf to provide visibility into producer
      performance.

    4. Forces the caller application to block and wait for a confirmation.

    5. KPL does not incur any processing delay and hence is useful for all applications writing
      data to a Kinesis stream.

    6. RecordMaxBufferedTime within the library is set to 1 millisecond and not changeable.

  25. Which of the following is true about Kinesis Client Library? (Choose three.)

    1. KCL is a Java library and does not support other languages.

    2. KCL connects to the data stream and enumerates the shards within the data stream.

    3. KCL pulls data records from the data stream.

    4. KCL does not provide a checkpointing mechanism.

    5. KCL instantiates a record processor for each stream.

    6. KCL pushes the records to the corresponding record processor.

  26. Which of the following metrics are sent by the Amazon Kinesis Data Streams agent to
    Amazon CloudWatch? (Choose three.)

    1. MBs Sent

    2. RecordSendAttempts

    3. RecordSendErrors

    4. RecordSendFailures

    5. ServiceErrors

    6. ServiceFailures

  27. You are working as a data engineer for a gaming startup, and the operations team notified
    you that they are receiving a ReadProvisionedThroughputExceeded error. They are asking
    you to help out and identify the reason for the issue and help in the resolution. Which of the
    following statements will help? (Choose two.)

    1. The GetRecords calls are being throttled by KinesisDataStreams over a duration of
      time.

    2. The GetShardIterator is unable to get a new shard over a duration of time.

    3. Reshard your stream to increase the number of shards.

    4. Redesign your stream to increase the time between checks for the provision throughput
      to avoid the errors.
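While resharding takes effect, consumers typically retry throttled reads with exponential backoff. A sketch of that pattern; "fetch" stands in for a GetRecords call and ThrottledError for the ReadProvisionedThroughputExceeded error, both illustrative names rather than SDK types:

```python
# Sketch: retrying a throttled read with exponential backoff.
import time

class ThrottledError(Exception):
    pass

def get_records_with_backoff(fetch, max_attempts=5, base_delay=0.1):
    """Call fetch(), doubling the sleep after each throttled attempt."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except ThrottledError:
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    raise ThrottledError("still throttled after %d attempts" % max_attempts)
```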



  28. You are working as a data engineer for a microblogging website that is using Kinesis
    for streaming weblogs data. The operations team notified you that they are experiencing an
    increase in latency when fetching records from the stream. They are asking you to help
    out and identify the reason for the issue and help in the resolution. Which of the following
    statements will help? (Choose three.)

    1. There is an increase in record count resulting in an increase in latency.

    2. There is an increase in the size of the record for each GET request.

    3. There is an increase in the shard iterator’s latency resulting in an increase in record
      fetch latency.

    4. Increase the number of shards in your stream.

    5. Decrease the stream retention period to catch up with the data backlog.

    6. Move the processing to MSK to reduce latency.

  29. Which of the following is true about rate limiting features on Amazon Kinesis?
    (Choose two.)

    1. Rate limiting is not possible within Amazon Kinesis and you need MSK to implement
      rate limiting.

    2. Rate limiting is only possible through Kinesis Producer Library.

    3. Rate limiting is implemented using tokens and buckets within Amazon Kinesis.

    4. Rate limiting uses standard counter implementation.

    5. Rate limiting threshold is set to 50 percent and is not configurable.

  30. What is the default data retention period for a Kinesis stream?

    1. 12 hours

    2. 168 hours

    3. 30 days

    4. 365 days

  31. Which of the following options help improve efficiency with Kinesis Producer Library?
    (Choose two.)

    1. Aggregation

    2. Collection

    3. Increasing number of shards

    4. Reducing overall encryption

  32. Which of the following services are valid destinations for Amazon Kinesis Firehose?
    (Choose three.)

    1. Amazon S3

    2. Amazon SageMaker

    3. Amazon Elasticsearch

    4. Amazon Redshift



    5. Amazon QuickSight

    6. AWS Glue

  33. Which of the following is a valid mechanism to do data transformations from Amazon
    Kinesis Firehose?

    1. AWS Glue

    2. Amazon SageMaker

    3. Amazon Elasticsearch

    4. AWS Lambda
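The transformation Lambda follows a documented contract: each incoming record carries a recordId and base64-encoded data, and the function returns each record with a result of "Ok", "Dropped", or "ProcessingFailed". A sketch with a placeholder uppercasing transform:

```python
# Sketch of a Kinesis Data Firehose transformation Lambda. The event and
# response shapes follow the documented Firehose contract; the uppercasing
# is only a placeholder transform.
import base64

def handler(event, context):
    out = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()  # placeholder transformation
        out.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": out}
```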

  34. Which of the following is a valid mechanism to perform record conversions from Amazon
    Kinesis Firehose AWS Console? (Choose two.)

    1. Apache Parquet

    2. Apache ORC

    3. Apache Avro

    4. Apache Pig

  35. You are working as a data engineer for a mid-sized boating company that is capturing data
    in real time for all of its boats connected via a 3G/4G connection. The boats typically sail
    in areas with good connectivity, and data loss from the IoT devices on the boat to a Kinesis
    stream is not possible. You are monitoring the data arriving from the stream and have
    realized that some of the records are being missed. What can be the underlying issue for
    records being skipped?

    1. The connectivity from the boat to AWS is the reason for missed records.

    2. processRecords() is throwing exceptions that are not being handled and hence the
      missed records.

    3. The shard is already full and hence the data is being missed.

    4. The record length is more than expected.

  36. How does Kinesis Data Firehose handle server-side encryption? (Choose three.)

    1. Kinesis Data Firehose does not support server-side encryption.

    2. Kinesis Data Firehose server-side encryption depends on the data source.

    3. Kinesis Data Firehose does not store the unencrypted data at rest when the data source
      is a Kinesis Data stream encrypted by AWS KMS.

    4. Kinesis Data Firehose stores the unencrypted data to S3 when the data source is a
      Kinesis Data stream encrypted by AWS KMS.

    5. When data is delivered using Direct PUT, you can start encryption by using
      StartDeliveryStreamEncryption.

    6. When data is delivered using Direct PUT, you can start encryption by using
      StartKinesisFirhoseEncryption.

  37. How can you start an AWS Glue job automatically after the completion of a crawler?
    (Choose two.)

    1. Use AWS Glue triggers to start a job when the crawler run completes.

    2. Create an AWS Lambda function using Amazon CloudWatch events rule.

    3. Use AWS Glue workflows.

    4. This is not possible. You have to run it manually.

  38. You are working as a consultant for an advertising agency that has hired a number of data
    scientists who are working to improve the online and offline campaigns for the company
    and using AWS Glue for most of their data engineering workloads. The data scientists
    have broad experience with adtech workloads and before joining the team have developed
    Python libraries that they would like to use in AWS Glue. How can they use the external
    Python libraries in an AWS Glue job? (Choose two.)

    1. Package the libraries in a .tar file, and upload to Amazon S3.

    2. Package the libraries in a .zip file, and upload to Amazon S3.

    3. Use the library in a job or job run.

    4. Unzip the compressed file programmatically before using the library in the job or job
      run.

  39. You are working as a consultant for a large conglomerate that has recently acquired another
    company. It is looking to integrate the applications using a messaging system and it would
    like the applications to remain decoupled but still be able to send messages. Which of the
    following is the most cost-effective and scalable service to achieve the objective?

    1. Apache Flink on Amazon EMR

    2. Amazon Kinesis

    3. Amazon SQS

    4. AWS Glue streaming

  40. What types of queues does Amazon SQS support? (Choose two.)

    1. Standard queue

    2. FIFO queue

    3. LIFO queue

    4. Advanced queue

  41. You are working as a data engineer for a telecommunications operator that is using
    DynamoDB for its operational data store. The company is looking to use AWS Data
    Pipeline for workflow orchestration and needs to send some SNS notifications as soon as
    an order is placed and a record is available in the DynamoDB table. What is the best way to
    handle this?

    1. Configure a lambda function to keep scanning the DynamoDB table. Send an SNS
      notification once you see a new record.

    2. Configure Amazon DynamoDB streams to orchestrate AWS Data Pipeline kickoff.

    3. Configure an AWS Glue job that reads the DynamoDB table to trigger an AWS Data
      Pipeline job.

    4. Use the preconditions available in AWS Data Pipeline.

  42. You have been consulting on the AWS analytics platform for some years now. One of your
    top customers has reached out to you to understand the best way to export data from
    its DynamoDB table to its data lake on S3. The customer is looking to keep the cost to a
    minimum and ideally not involve consulting expertise at this moment. What is the easiest
    way to handle this?

    1. Export the data from Amazon DynamoDB to Amazon S3 using EMR custom scripts.

    2. Build a custom lambda function that scans the data from DynamoDB and writes it to
      S3.

    3. Use AWS Glue to read the DynamoDB table and use AWS Glue script generation to
      generate the script for you.

    4. Use AWS Data Pipeline to copy data from DynamoDB to Amazon S3.

  43. You have built your organization’s data lake on Amazon S3. You are looking to capture and
    track all requests made to an Amazon S3 bucket. What is the simplest way to enable this?

    1. Use Amazon Macie.

    2. Use Amazon CloudWatch.

    3. Use AWS CloudTrail.

    4. Use Amazon S3 server access logging.

  44. Your customer has recently received multiple 503 Slow Down errors during the Black
    Friday sale while ingesting data to Amazon S3. What could be the reason for this error?

    1. Amazon S3 is unable to scale to the needs of your data ingestion patterns.

    2. This is an application-specific error originating from your web application and has
      nothing to do with Amazon S3.

    3. You are writing lots of objects per prefix. Amazon S3 is scaling in the background to
      handle the spike in traffic.

    4. You are writing large objects resulting in this error from Amazon S3.

  45. Which of the following is a fully managed NoSQL service?

    1. Amazon Redshift

    2. Amazon Elasticsearch

    3. Amazon DynamoDB

    4. Amazon DocumentDB

  46. Your customer is using Amazon DynamoDB for its operational use cases. One of its
    engineers has accidentally deleted 10 records. Which of the following is a valid statement
    when it comes to recovering Amazon DynamoDB data?

    1. Use backups from Amazon S3 to re-create the tables.

    2. Use backups from Amazon Redshift to re-create the tables.

    3. Use data from a different region.

    4. Use Amazon DynamoDB PITR to recover the deleted data.

  47. Which of the following scenarios suit a provisioned scaling mode for DynamoDB?
    (Choose two.)

    1. You have predictable application traffic.

    2. You are running applications whose traffic is consistent or ramps up gradually.

    3. You are unable to forecast your capacity requirements.

    4. You prefer a pay-as-you-go pricing model.

  48. Which of the following statements are true about primary keys in DynamoDB?
    (Choose two.)

    1. A table’s primary key can be defined after the table creation.

    2. DynamoDB supports two types of primary keys only.

    3. A composite primary key is the same as a combination of partition key and sort key.

    4. DynamoDB uses a sort key as an input to an internal hash function, the output of
      which determines the partition where the item is stored.

  49. You are working as a data engineer for a large corporation that is using DynamoDB to
    power its low-latency application requests. The application is based on a customer orders
    table that is used to provide information about customer orders based on a specific
    customer ID. A new requirement has recently arisen to identify customers based on a
    specific product ID. You decided to implement it as a secondary index. The application
    engineering team members have recently complained about the performance they are getting
    from the secondary index. Which of the following is the most common reason for the
    performance degradation of a secondary index in DynamoDB?

    1. The application engineering team is querying data for projected attributes.

    2. The application engineering team is querying data not projected in the secondary
      index.

    3. The application engineering team is querying a partition key that is not part of the
      local secondary index.

    4. The application engineering team is querying data for a different sort key value.

  50. Your customer is looking to reduce the spend on its on-premises storage while ensuring
    low latency for the application, which depends on a subset of the entire dataset. The
    customer is happy with the characteristics of Amazon S3. Which of the following would
    you recommend?

    1. Cached volumes

    2. Stored volumes

    3. File gateway

    4. Tape gateway

  51. Your customer is looking to reduce the spend on its on-premises storage while ensuring
    low latency for the application, which depends on the entire dataset. The customer is happy
    with the characteristics of Amazon S3. Which of the following would you recommend?

    1. Cached volumes

    2. Stored volumes

    3. File gateway

    4. Tape gateway

  52. You are working as a consultant for a telecommunications company. The data scientists
    have requested direct access to the data to dive deep into the structure of the data and build
    models. They have good knowledge of SQL. Which of the following tools will you choose
    to provide them with direct access to the data and reduce the infrastructure and
    maintenance overhead while ensuring that access to data on Amazon S3 can be provided?

    1. Amazon S3 Select

    2. Amazon Athena

    3. Amazon Redshift

    4. Apache Presto on Amazon EMR

  53. Which of the following file formats are supported by Amazon Athena? (Choose three.)

    1. Apache Parquet

    2. CSV

    3. DAT

    4. Apache ORC

    5. Apache Avro

    6. TIFF

  54. Your EMR cluster is facing performance issues. You are looking to investigate the errors
    and understand the potential performance problems on the nodes. Which of the following
    nodes can you skip during your test?

    1. Master node

    2. Core node

    3. Task node

    4. Leader node

  55. Which of the following statements are true about Redshift leader nodes? (Choose three.)

    1. Redshift clusters can have a single leader node.

    2. Redshift clusters can have more than one leader node.

    3. Redshift leader nodes should have more memory than the compute nodes.

    4. Redshift leader nodes have the exact same specifications as the compute nodes.

    5. You can choose your own leader node sizing, and it is priced separately.

    6. The Redshift leader node is chosen automatically and is free to the users.

Answers to the Assessment Test

  1. B. Option A is incorrect as storing large media files in Redshift is a bad choice due to
    potential cost. Option C is incorrect as EMR is not a cost-effective choice to store image
    metadata. Option D is incorrect as DynamoDB is not cost-effective for large scan operations.
    Option B is correct as Amazon S3 is the right choice for large media files. Amazon Redshift
    is a good option for a managed data warehousing service.

  2. A. Option A is correct because MapReduce was the default processing engine on Hadoop
    until Hadoop 2.0 arrived. Option B is incorrect because YARN is a resource manager for
    applications on Hadoop. Option C is incorrect as Hive is a SQL layer that makes use of
    processing engines like MapReduce, Spark, Tez, and so on. Option D is incorrect as
    ZooKeeper is a distributed configuration and synchronization service that acts as a naming
    registry for large distributed systems.

  3. B. AWS Glue is the simplest way to achieve data transformation using mostly a
    point-and-click interface and making use of built-in deduplication using FindMatches ML
    Transform.

  4. B, C, D. Option A is incorrect because AWS Glue built-in classifiers cannot be used for any
    data type. B, C, and D are correct; refer to
    docs.aws.amazon.com/glue/latest/dg/populate-data-catalog.html.

  5. D. AWS Glue is the most cost-effective way to achieve this since it provides a native catalog
    curation using crawlers and visibility across different AWS users.

  6. D. AWS Glue is the most cost-effective way to achieve this since it provides native
    built-in FindMatches capability; see
    docs.aws.amazon.com/glue/latest/dg/machine-learning.html.

  7. A. AWS Glue stores metadata for your data that can be an S3 file, Amazon RDS table,
    a Redshift table, or any other supported source. A table in the AWS Glue Data Catalog
    consists of the names of columns, data type definitions, and other metadata about a
    base dataset.

  8. A. More information is available at
    docs.aws.amazon.com/glue/latest/dg/notebooks-with-glue.html.

  9. C, D. AWS Glue supports the Scala and Python languages.

  10. D. Option A does not provide any support for analyzing data in real time. Option B is
    incorrect and vague. Option C involves Kinesis Firehose, which helps in aggregating the
    data rather than real-time data analysis. Option D is correct as it involves Kinesis
    Analytics and KCL.

  11. C, D. Options A and B are incorrect because the question is asking about connecting your
    on-premises applications and loading streaming data rather than IoT connectivity. Option
    C is correct as Amazon Kinesis is the streaming option available on AWS. Option D is
    correct as Direct Connect allows connectivity from your on-premises data center to AWS.

  12. C. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data
    so you can get timely insights and react quickly to new information. Amazon Kinesis offers
    key capabilities to cost-effectively process streaming data at any scale, along with the flexi-
    bility to choose the tools that best suit the requirements of your application. With Amazon
    Kinesis, you can ingest real-time data such as video, audio, application logs, website
    clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
    Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly
    instead of having to wait until all your data is collected before the processing can begin.

  13. A. Batching refers to performing a single action on multiple items instead of repeatedly
    performing the action on each individual item.

  14. D. A Kinesis data stream is a set of shards. Each shard has a sequence of data records. Each
    data record has a sequence number that is assigned by Kinesis Data Streams.

  15. B. Amazon Kinesis Firehose can deliver streaming data to S3 and hence is an ideal choice
    for this. While you can achieve the same with other technologies, it does involve additional
    work and custom implementations.

  16. C. Amazon Managed Streaming for Kafka is an ideal solution for migrating from
    on-premises Kafka installations to the AWS platform, which ensures that the code base
    remains the same. You spend a lot less time managing the infrastructure, and being a
    managed service ensures that resources are allocated to other applications rather than
    leaving you with the cumbersome task of managing Kafka.

  17. B, D. As a best practice, use an on-premises cluster as a guideline for your cluster
    configuration and the MSK calculator for pricing and sizing.

  18. A, C, D, E. Please refer to the following documentation link:
    docs.aws.amazon.com/msk/latest/developerguide/bestpractices.html#bestpractices-monitor-disk-space.

  19. B. Amazon Kinesis Data Streams is the only service that can put data directly to Amazon
    Kinesis Firehose.

  20. D. A partition key is used to group data by shard within a stream. Kinesis Data Streams
    segregates the data records belonging to a stream into multiple shards. It uses the partition
    key that is associated with each data record to determine which shard a given data record
    belongs to.
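The key-to-shard routing described above can be sketched in plain Python. This is an illustration of the idea only: the real service compares the MD5 value against each shard's explicit hash-key range, which the sketch assumes are evenly split.

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    # Kinesis MD5-hashes the partition key to a 128-bit integer and routes
    # the record to the shard whose hash-key range contains that integer.
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    key_hash = int.from_bytes(digest, "big")      # 128-bit hash value
    bucket = 2 ** 128 // num_shards               # width of each (even) shard range
    return min(key_hash // bucket, num_shards - 1)
```

Because the mapping is deterministic, records sharing a partition key always land on the same shard, which is why a skewed key choice produces "hot" shards.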

  21. A, B, C. A producer is an application that writes data to Amazon Kinesis Data Streams.
    You can build producers for Kinesis Data Streams using the AWS SDK for Java and the
    Kinesis Producer Library.

  22. D. The requirements in the question are streaming data, SQL skills, and pay-as-you-go
    pricing. All of these requirements are met with KDA.

  23. B. Producers write data to a Kinesis stream.

  24. A, B, C. Please refer to KPL documentation. Option D is incorrect because of its
    asynchronous architecture. Because the KPL may buffer records before sending them to
    Kinesis Data Streams, it does not force the caller application to block and wait for a
    confirmation that the record has arrived. Option E is incorrect as the KPL can incur an
    additional processing delay of up to RecordMaxBufferedTime within the library
    (user-configurable). Larger values of RecordMaxBufferedTime result in higher packing
    efficiencies and better performance. Option F is incorrect because RecordMaxBufferedTime
    is user configurable in the library.

  25. B, C, F. Option A is incorrect because while KCL is a Java library, support for languages
    other than Java is provided using a multi-language interface called the MultiLangDaemon.
    Option D is incorrect because KCL does provide a checkpoint mechanism. Option E is
    incorrect because KCL instantiates a record processor for each shard.

  26. B, C, E. Option A is incorrect as the metric is BytesSent rather than MBs Sent. Option D
    is incorrect as the metric is RecordSendErrors rather than RecordSendFailures. Option F is
    incorrect as the metric is ServiceErrors rather than ServiceFailures.

  27. A, C. The ReadProvisionedThroughputExceeded error occurs when GetRecords calls are
    throttled by Kinesis Data Streams over a duration of time. Your Amazon Kinesis data
    stream can throttle if the following limits are breached:

    Each shard can support up to five read transactions per second (or five GetRecords calls/
    second for each shard).

    Each shard can support up to a maximum read rate of 2 MiB/second.

    GetRecords can retrieve up to 10 MiB of data per call from a single shard and up to 10,000
    records per call. If a call to GetRecords returns 10 MiB of data, subsequent calls made
    within the next 5 seconds result in an error.
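Those per-shard limits translate directly into a minimum shard count for a given read workload. A back-of-the-envelope sizing helper, assuming the 2 MiB/s and 5 calls/s per-shard defaults quoted above:

```python
import math

def min_shards_for_reads(read_mib_per_sec: float, get_calls_per_sec: float) -> int:
    # Each shard supports up to 2 MiB/s of reads and 5 GetRecords calls/s,
    # so the workload needs enough shards to satisfy whichever limit binds.
    by_bytes = math.ceil(read_mib_per_sec / 2)
    by_calls = math.ceil(get_calls_per_sec / 5)
    return max(by_bytes, by_calls, 1)
```

For example, a workload reading 10 MiB/s with 5 GetRecords calls/s is byte-bound and needs 5 shards.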

  28. A, B, D. GetRecords.Latency can increase if there is an increase in record count or record
    size for each GET request. If you tried to restart your application while the producer was
    ingesting data into the stream, records can accumulate without being consumed. This
    increase in the record count or amount of data to be fetched increases the value for
    GetRecords.Latency. Additionally, if an application is unable to catch up with the ingestion
    rate, the IteratorAge gets increased. Note: Enabling server-side encryption on your Kinesis
    data stream can also increase your latency.

  29. B, C. Option A is incorrect as rate limiting is possible with Kinesis. Option D is incorrect
    as rate limiting is implemented using a token bucket algorithm with separate buckets for
    both Kinesis Data Streams records and bytes. Option E is incorrect as this threshold is
    configurable but by default is set 50 percent higher than the actual shard limit, to allow
    shard saturation from a single producer.

  30. B. Amazon Kinesis Data Streams supports changes to the data record retention period of
    your stream. A Kinesis data stream is an ordered sequence of data records meant to be
    written to and read from in real time. Data records are therefore stored in shards in your
    stream temporarily. The time period from when a record is added to when it is no longer
    accessible is called the retention period. A Kinesis data stream stores records for 24 hours
    by default, configurable up to 168 hours.
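Retention is changed with the IncreaseStreamRetentionPeriod API. A hedged boto3 sketch (the stream name is a placeholder, and note that later service updates raised the maximum beyond the 168 hours quoted here, so the validation mirrors this book's snapshot of the limits):

```python
def valid_retention_hours(hours: int) -> bool:
    # 24 hours (the default) up to 168 hours, per the limits described above.
    return 24 <= hours <= 168

def set_retention(stream_name: str, hours: int) -> None:
    if not valid_retention_hours(hours):
        raise ValueError("retention must be between 24 and 168 hours")
    import boto3  # deferred so the sketch loads without AWS dependencies
    boto3.client("kinesis").increase_stream_retention_period(
        StreamName=stream_name, RetentionPeriodHours=hours
    )
```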

  31. A, B. The KPL supports two types of batching:

    Aggregation – Storing multiple records within a single Kinesis Data Streams record

    Collection – Using the API operation PutRecords to send multiple Kinesis Data Streams
    records to one or more shards in your Kinesis data stream
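The difference between the two batching types is easy to see in a toy sketch. The real KPL wraps aggregated user records in a protobuf envelope; here we just length-prefix and concatenate to show how many small user records fit into one stream record of at most 1 MiB:

```python
def aggregate(user_records, max_blob=1024 * 1024):
    # Toy KPL-style aggregation: pack many small user records into as few
    # <= max_blob stream records as possible, each record framed with a
    # 4-byte big-endian length prefix.
    blobs, current = [], b""
    for rec in user_records:
        framed = len(rec).to_bytes(4, "big") + rec
        if current and len(current) + len(framed) > max_blob:
            blobs.append(current)   # current blob is full; start a new one
            current = b""
        current += framed
    if current:
        blobs.append(current)
    return blobs
```

Collection is the complementary step: the resulting blobs would then be sent together in a single PutRecords call instead of one PutRecord call each.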

  32. A, C, D. Kinesis Data Firehose can send records to Amazon Simple Storage Service
    (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and any
    HTTP endpoint owned by you or any of your third-party service providers, including
    Datadog, New Relic, and Splunk.

  33. D. Kinesis Data Firehose can invoke your lambda function to transform incoming source
    data and deliver the transformed data to destinations. You can enable Kinesis Data Firehose
    data transformation when you create your delivery stream.
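The transformation Lambda follows a fixed contract: Firehose invokes it with base64-encoded records and expects each record back with its `recordId`, a `result` of `Ok`, `Dropped`, or `ProcessingFailed`, and re-encoded `data`. A minimal sketch in that shape (the uppercased `msg` field is an invented example transform, not part of the contract):

```python
import base64
import json

def handler(event, context):
    # Decode each incoming record, apply a transform, and hand it back
    # to Firehose marked "Ok" so the transformed payload is delivered.
    out = []
    for rec in event["records"]:
        payload = json.loads(base64.b64decode(rec["data"]))
        payload["msg"] = payload.get("msg", "").upper()   # example transform
        out.append({
            "recordId": rec["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": out}
```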

  34. A, B. Kinesis Firehose can convert data to Parquet and ORC. Option C is incorrect as it is
    unsupported. Option D is incorrect as it is not a file format.

  35. B. The most common cause of skipped records is an unhandled exception thrown from
    processRecords. The Kinesis Client Library (KCL) relies on your processRecords code to
    handle any exceptions that arise from processing the data records. Any exception thrown
    from processRecords is absorbed by the KCL.

  36. B, C, E. If you have sensitive data, you can enable server-side data encryption when you use
    Amazon Kinesis Data Firehose. How you do this depends on the source of your data.

  37. B, C. It’s not possible to use AWS Glue triggers to start a job when a crawler run
    completes. Use one of the following methods instead:

    Create an AWS Lambda function and an Amazon CloudWatch Events rule. When you
    choose this option, the lambda function is always on. It monitors the crawler regardless of
    where or when you start it. You can also modify this method to automate other AWS Glue
    functions. For more information, see “How can I use a Lambda function to automatically
    start an AWS Glue job when a crawler run completes?”
    https://aws.amazon.com/premiumsupport/knowledge-center/start-glue-job-crawler-completes-lambda

    Use AWS Glue workflows. This method requires you to start the crawler from the
    Workflows page on the AWS Glue console. For more information, see “How can I use AWS
    Glue workflows to automatically start a job when a crawler run completes?”
    https://aws.amazon.com/premiumsupport/knowledge-center/start-glue-job-after-crawler-workflow
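A sketch of the Lambda side of the first method. The event shape matches the Glue "Crawler State Change" events delivered via CloudWatch Events/EventBridge, and the job name is a placeholder:

```python
def handler(event, context):
    # Only react when the crawler finished successfully; ignore other states.
    if event.get("detail", {}).get("state") != "Succeeded":
        return "ignored"
    import boto3  # deferred so the module imports without AWS dependencies
    glue = boto3.client("glue")
    # Hypothetical job name -- substitute the job that consumes the crawled tables.
    glue.start_job_run(JobName="nightly-etl-job")
    return "started"
```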

  38. B, C. To use an external library in an Apache Spark ETL job:

    1. Package the library files in a .zip file (unless the library is contained in a single
      .py file).

    2. Upload the package to Amazon Simple Storage Service (Amazon S3).

    3. Use the library in a job or job run.

  39. C. Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available
    hosted queue that lets you integrate and decouple distributed software systems and
    components.

  40. A, B. Amazon SQS uses queues to integrate producers and consumers of a message.
    Producers write messages to a queue, and consumers pull messages from the queue. A
    queue is simply a buffer between producers and consumers. We have two types of queues:

    Standard queue – Highly scalable with maximum throughput, best-effort ordering, and
    at-least-once delivery semantics

    FIFO queue – Exactly-once semantics with guaranteed ordering, with lesser scalability than
    a standard queue
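When creating queues with boto3, the type is chosen via queue attributes; FIFO queues additionally require a name ending in `.fifo`. A small helper sketch (the attribute values shown are one reasonable configuration, not the only one):

```python
def fifo_queue_attributes(name: str) -> dict:
    # FIFO queue names must end in '.fifo'. Content-based deduplication
    # lets SQS deduplicate using a SHA-256 of the message body.
    if not name.endswith(".fifo"):
        raise ValueError("FIFO queue names must end in '.fifo'")
    return {"FifoQueue": "true", "ContentBasedDeduplication": "true"}

def create_fifo_queue(name: str):
    import boto3  # deferred so the sketch loads without AWS dependencies
    sqs = boto3.client("sqs")
    return sqs.create_queue(QueueName=name, Attributes=fifo_queue_attributes(name))
```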

  41. D. You can use a combination of the following preconditions: DynamoDBDataExists
    checks whether data exists in a specific DynamoDB table. DynamoDBTableExists checks
    whether a DynamoDB table exists.

  42. D. DynamoDBDataNode defines a data node using DynamoDB, which is specified as an
    input to a HiveActivity or EMRActivity object. S3DataNode defines a data node using
    Amazon S3. By default, the S3DataNode uses server-side encryption. If you would like to
    disable this, set s3EncryptionType to NONE.

  43. D. Server access logging provides detailed records for the requests that are made to a
    bucket. Server access logs are useful for many applications. For example, access log
    information can be useful in security and access audits.
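Enabling it programmatically is a single `put_bucket_logging` call; the bucket names below are placeholders, and the target bucket must separately grant the S3 log delivery service permission to write:

```python
def logging_status(target_bucket: str, prefix: str = "access-logs/") -> dict:
    # BucketLoggingStatus payload understood by s3.put_bucket_logging.
    return {"LoggingEnabled": {"TargetBucket": target_bucket, "TargetPrefix": prefix}}

def enable_access_logging(bucket: str, target_bucket: str) -> None:
    import boto3  # deferred so the sketch loads without AWS dependencies
    boto3.client("s3").put_bucket_logging(
        Bucket=bucket, BucketLoggingStatus=logging_status(target_bucket)
    )
```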

  44. C. You can send 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per
    second per partitioned prefix in an S3 bucket. When you have an increased request rate to
    your bucket, Amazon S3 might return 503 Slow Down errors while it scales to support the
    request rate. This scaling process is called partitioning.
    https://aws.amazon.com/premiumsupport/knowledge-center/s3-resolve-503-slowdown-throttling
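The standard client-side response while S3 partitions is to retry with exponential backoff and jitter, which the AWS SDKs already do by default. A sketch of the retry schedule (base and cap values are illustrative):

```python
import random

def backoff_delays(attempts: int, base: float = 0.1, cap: float = 5.0) -> list:
    # Full-jitter exponential backoff: before retry i, sleep a random
    # amount between 0 and min(cap, base * 2**i) seconds.
    return [random.uniform(0.0, min(cap, base * 2 ** i)) for i in range(attempts)]
```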

  45. C. Option A is incorrect as Amazon Redshift is a relational database service. Option
    B is incorrect as Elasticsearch is not a NoSQL service. Option C is correct as Amazon
    DynamoDB is a fully managed NoSQL service. Option D is incorrect as Amazon
    DocumentDB is a more specialized document database rather than a general-purpose
    NoSQL store.

  46. D. You can create on-demand backups and enable point-in-time recovery (PITR) for your
    Amazon DynamoDB tables. Point-in-time recovery helps protect your tables from
    accidental write or delete operations. With point-in-time recovery, you can restore that
    table to any point in time during the last 35 days. For more information, see “Point-in-Time
    Recovery: How It Works.”
    https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
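A hedged boto3 sketch of a PITR restore (table names are placeholders; note that a PITR restore always creates a new table rather than overwriting the source):

```python
from datetime import datetime, timedelta, timezone

def earliest_restore_point(now: datetime = None) -> datetime:
    # PITR keeps a rolling window of up to 35 days.
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=35)

def restore_table(source: str, target: str, when: datetime) -> None:
    import boto3  # deferred so the sketch loads without AWS dependencies
    boto3.client("dynamodb").restore_table_to_point_in_time(
        SourceTableName=source, TargetTableName=target, RestoreDateTime=when
    )
```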

  47. A, B. Provisioned mode is a good option if any of the following are true:

    You have predictable application traffic.

    You run applications whose traffic is consistent or ramps gradually.

    You can forecast capacity requirements to control costs.

  48. B, C. When you create a table, in addition to the table name, you must specify the primary
    key of the table. The primary key uniquely identifies each item in the table, so that no two
    items can have the same key.

    DynamoDB supports two different kinds of primary keys:

    Partition key – A simple primary key, composed of one attribute known as the partition key

    Composite primary key – This type of key is composed of two attributes. The first attribute
    is the partition key, and the second attribute is the sort key.

    DynamoDB uses the partition key value as input to an internal hash function.
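In boto3 terms, a composite primary key is declared through `KeySchema` (HASH for the partition key, RANGE for the sort key) plus matching `AttributeDefinitions`, as passed to `dynamodb.create_table`. A builder sketch with placeholder attribute names:

```python
def composite_key_schema(partition_key: str, sort_key: str) -> dict:
    # KeySchema/AttributeDefinitions for a composite primary key; both
    # attributes are declared as strings ("S") for simplicity.
    return {
        "KeySchema": [
            {"AttributeName": partition_key, "KeyType": "HASH"},   # partition key
            {"AttributeName": sort_key, "KeyType": "RANGE"},       # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": partition_key, "AttributeType": "S"},
            {"AttributeName": sort_key, "AttributeType": "S"},
        ],
    }
```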

  49. B. Non-projected attributes in DynamoDB will result in DynamoDB fetching the attributes
    from the base table, resulting in poor performance.

  50. A. You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of
    frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on
    primary storage and minimize the need to scale your storage on-premises. You also retain
    low-latency access to your frequently accessed data.

  51. B. If you need low-latency access to your entire dataset, first configure your on-premises
    gateway to store all your data locally. Then asynchronously back up point-in-time
    snapshots of this data to Amazon S3. This configuration provides durable and inexpensive
    off-site backups that you can recover to your local data center or Amazon Elastic Compute
    Cloud (Amazon EC2). For example, if you need replacement capacity for disaster recovery,
    you can recover the backups to Amazon EC2.

  52. B. Amazon Athena is an interactive query service that makes it easy to analyze data in
    Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to
    manage, and you pay only for the queries that you run.

  53. A, B, D. Amazon Athena supports a wide variety of data formats, such as CSV, TSV,
    JSON, and text files, and also supports open-source columnar formats such as Apache ORC
    and Apache Parquet. Athena also supports compressed data in Snappy, Zlib, LZO, and
    GZIP formats. By compressing, partitioning, and using columnar formats, you can improve
    performance and reduce your costs.

  54. D. Leader node is Amazon Redshift terminology. Master node is the corresponding term
    for an EMR cluster.

  55. A, D, F. Please read section “Redshift Architecture” in Chapter 4, “Data Processing and
    Analytics.”

Assessment Test

The following assessment test will give you a general idea about your analytics skillset on
AWS and can identify areas where you should apply more focus. This test touches upon
some basic concepts but will give you an indication of what types of questions you can
expect from the AWS Certified Data Analytics Specialty certification exam.

  1. An organization looking to build a real-time operational analytics dashboard for its mobile
    gaming application is looking at various options to build the dashboard. Which of the
    following options will provide the right performance characteristics for such an application?

    1. Use Amazon S3 to power the dashboard.

    2. Use Amazon Redshift to power the dashboard.

    3. Use Amazon Elasticsearch Service to power the dashboard.

    4. Use Amazon DynamoDB to power the dashboard.

  2. An administrator has a 6 GB file in Amazon S3. The administrator runs a nightly COPY
    command into a 2-node (32 slices) Amazon Redshift cluster. The administrator wants to
    prepare the data to optimize performance of the COPY command. What is the best way for
    the administrator to prepare the data?

    1. Compress the file using gzip compression.

    2. Split the file into 64 files of equal size.

    3. Split the file into 500 smaller files.

    4. Split the file into 32 files of equal size.

  3. A customer wants to build a log analytics solution on AWS with sub-second latency for the
    search facility. An additional requirement is to build a dashboard for the operations staff.
    Which of the following technologies would provide a more optimal solution?

    1. Store the logs in Amazon S3 and use Amazon Athena to query the logs. Use Amazon
      QuickSight to build the dashboard.

    2. Store the logs in Amazon Redshift and use Query Editor to access the logs. Use
      Amazon QuickSight to build the dashboard.

    3. Store the logs in Amazon Elasticsearch Service and use Kibana to build a dashboard.

    4. Store the logs in HDFS on an EMR cluster. Use Hive to query the logs. Use Amazon
      QuickSight to build the dashboard.

  4. A leading telecommunications provider is moving to AWS and has around 50 TB of data in
    its on-premises Hadoop environment, stored in HDFS, and is using Spark SQL to analyze
    the data. The customer has asked for a cost-effective and efficient solution to migrate onto
    AWS quickly, over a 100 Mbps (megabits per second) connection, to be able to build a
    catalog that can be used by other services and analyze the data while managing as few
    servers as possible. Which solution would be best for this customer?

    1. Migrate the data to Amazon S3 using S3 commands via the CLI, use AWS Glue
      to crawl the data and build a catalog, and analyze it using Amazon Athena.

    2. Migrate the data to S3 using AWS Snowball, use AWS Glue to crawl the data and build
      a catalog, and analyze it using Amazon Athena.

    3. Migrate the data using Amazon Snowball to Amazon S3, use Hive on EMR running
      Spark to build a catalog, and analyze the data using Spark SQL.

    4. Migrate the data using the CLI interface to Amazon S3, use Hive on EMR running
      Spark to build a catalog, and analyze the data using Spark SQL.

  5. A leading financial organization is looking to migrate its enterprise data warehouse to
    AWS. It has 30 TB of data in its data warehouse but is only using 500 GB for reporting and
    dashboarding while the remaining data is occasionally required for compliance purposes.
    Which of the following is a more cost-effective solution?

    1. Migrate the data to AWS. Create an EMR cluster with attached HDFS storage hosting
      the 30 TB of data. Use Amazon QuickSight for dashboarding and Hive for compliance
      reporting requirements.

    2. Migrate the data to Amazon S3. Create a Redshift cluster hosting the 30 TB of data.
      Use QuickSight for dashboarding and Athena for querying from S3.

    3. Migrate the data to Amazon S3. Create a Redshift cluster hosting the 500 GB of data.
      Use Amazon QuickSight for dashboarding and Redshift Spectrum for querying the
      remaining data when required.

    4. Migrate the data to AWS in an Amazon Elasticsearch Service cluster. Use Kibana for
      dashboarding and Logstash for querying the data from the ES cluster.

  6. An upcoming gaming startup is collecting gaming logs from its recently launched and
    hugely popular game. The logs are arriving in JSON format with 500 different attributes
    for each record. The CMO has requested a dashboard based on six attributes that indicate
    the revenue generated from the in-game purchase recommendations produced by the
    marketing department's ML team. The data is on S3 in raw JSON format, and a report is
    being generated using QuickSight on the data available. Currently the report creation takes
    an hour, whereas publishing the report is very quick. Furthermore, the IT department has
    complained about the cost of data scans on S3.

    They have asked you as a solutions architect to provide a solution that improves
    performance and optimizes the cost. Which of the following options is most suitable to
    meet the requirements?

    1. Use AWS Glue to convert the JSON data into CSV format. Crawl the converted CSV
      format data with Glue Crawler and build a report using Amazon Athena. Build the
      front end on Amazon QuickSight.

    2. Load the JSON data to Amazon Redshift in a VARCHAR column. Build the report
      using SQL and front end on QuickSight.

    3. Load the JSON data to Amazon Redshift. Extract the reportable attributes from the
      JSON column in the tables. Build the report using SQL and front end on QuickSight.

    4. Use AWS Glue to convert the JSON data into Parquet format. Crawl the converted
      Parquet format with Glue Crawler, and build a report using Athena. Build the front
      end on Amazon QuickSight.



  7. A leading manufacturing organization is running a large Redshift cluster and has
    complained about slow query response times. What configuration options would you check to
    identify the root cause of the problem?

    1. The number and type of columns in the table

    2. Primary and secondary key constraints

    3. Alignment of the table’s sort key with predicates in the SELECT statement

    4. The number of rows in the table

    5. The partition schema for the database

  8. Your customer has asked you to help build a data lake on S3. The sources are MySQL and
    Oracle databases plus standard CSV files. The data does not have any incremental key. The customer
    expects you to capture the initial data dump and changes during the data lake build phase.
    What tools/technologies would you recommend to reduce the overall cost of migration?

    1. Load the initial dump and changes using AWS Glue.

    2. Load the initial dump and changes using DMS (Database Migration Service).

    3. Load the initial dump with AWS Glue and capture changes with AWS Glue and DMS
      (Database Migration Service).

    4. Load the initial dump using DMS (Database Migration Service) and changes in the
      data with AWS Glue.

  9. A QuickSight dashboard allows you to view the source data but not make any changes to it.

    1. True

    2. False

  10. Amazon QuickSight can interpret your charts and tables for you and suggest insights in
    plain English.

    1. True

    2. False


References

  1. www.weforum.org/agenda/2015/02/a-brief-history-of-big-data-everyone-should-read/

  2. www.kdnuggets.com/2017/07/4-types-data-analytics.html

  3. research.google.com/archive/gfs-sosp2003.pdf

  4. research.google.com/archive/mapreduce-osdi04.pdf

  5. en.wikipedia.org/wiki/Amazon_Web_Services

  6. open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/

  7. AWS re:Invent presentation by Siva Raghupathy (2017): www.youtube.com/watch?v=a3713oGB6Zk

Chapter 2: Data Collection


Review Questions

The following review questions will help test your understanding of this chapter's contents.
Be sure to read all the content mentioned in the references to be better prepared for the
actual exam.

  1. You work for a large ad-tech company that routinely displays a set of predefined ads.
    Due to the popularity of your products, your website is attracting a diverse and growing
    set of visitors. You are currently placing dynamic ads based on user click data; however,
    you have discovered that processing time is not keeping up: a user's stay on the website is
    short-lived (a few seconds) compared to your turnaround time for delivering a new ad
    (> 1 minute). You have been asked to evaluate AWS platform services for a possible solution
    to analyze the problem and reduce overall ad serving time. Which of the following is your
    recommendation?

    1. Push the click stream data to an Amazon SQS queue. Have your application subscribe
      to the SQS queue, and write data to an Amazon RDS instance. Perform analysis using
      SQL.

    2. Move the website to be hosted in AWS and use AWS Kinesis to dynamically process
      user click stream in real-time.

    3. Push web clicks to Amazon Kinesis Data Firehose and analyze with Kinesis Data
      Analytics or the Kinesis Client Library.

    4. Push web clicks to Amazon Kinesis Data Streams and analyze with Kinesis Data
      Analytics or the Kinesis Client Library (KCL).

  2. An upcoming startup with self-driving delivery trucks fitted with embedded sensors has
    requested that you capture data arriving in near real time and track the vehicles’ movement
    within the city. You will be capturing information from multiple sensors with data that
    includes the vehicle identification number, make, model, color, and GPS coordinates.
    Data is sent every 2 seconds and needs to be processed in near real time. Which of the
    following tools can you use to ingest the data and process it in near real time?

    1. Amazon Kinesis

    2. AWS Data Pipeline

    3. Amazon SQS

    4. Amazon EMR

  3. Which of the statements are true about AWS Glue crawlers? (Choose three.)

    1. AWS Glue crawlers provide built-in classifiers that can be used to classify any type of
      data.

    2. AWS Glue crawlers can connect to Amazon S3, Amazon RDS, Amazon Redshift,
      Amazon DynamoDB, and any JDBC sources.

    3. AWS Glue crawlers provide the option of custom classifiers which provide options to
      classify data that cannot be classified by built-in classifiers.

    4. AWS Glue crawlers write metadata to AWS Glue Data Catalog.



  4. You have been tasked to work on a new and exciting project where data is coming from
    smart sensors in the kitchen and is sent to the AWS platform. You have to ensure that you
    can filter and transform the data being received from the sensors before storing it in
    DynamoDB. Which of the following AWS services is ideally suited for this scenario?

    1. IoT Rules Engine

    2. IoT Device Shadow service

    3. IoT Message Broker

    4. IoT Device Shadow

  5. You are working for a Fortune 500 financial institution that is running its databases
    on Microsoft Azure. It has decided to move to AWS and is looking to migrate its
    SQL databases to Amazon RDS. Which of the following services can simplify the
    migration activity?

    1. Amazon Kinesis

    2. Managed Streaming for Kafka

    3. AWS Glue

    4. AWS Database Migration Service

  6. Your customer has an on-premises Cloudera cluster and is looking to migrate the workloads
    to the AWS platform. The customer does not want to pay the licensing and any fixed cost.
    His objective is to build a serverless pipeline in a pay-as-you-go model, ensuring that there
    is limited impact of migration on existing PySpark code. What would be your
    recommendation from the following options?

    1. Migrate the on-premises Cloudera cluster to the AWS platform by running it on EC2
      instances.

    2. Migrate the on-premises Cloudera cluster to a long-running Amazon EMR cluster.

    3. Migrate the on-premises PySpark code to a transient Amazon EMR cluster.

    4. Migrate the code to AWS Glue.

  7. You are working for a ride-hailing company that has recently ventured into food delivery
    and order processing. The order processing system is built on AWS. Order processing
    runs into scaling issues, particularly during lunch and dinner times when an excessive
    number of orders is received. In the current infrastructure, you have EC2 instances that
    pick up the orders from the application and EC2 instances in an Auto Scaling group to
    process the orders. What architecture would you recommend to ensure that the EC2
    processing instances are scaled correctly based on the demand?

    1. Use SQS queues to decouple the order receiving and order processing components of
      the architecture. Scale the processing servers based on the queue length.

    2. Use SQS queues to decouple the order receiving and order processing components of
      the architecture. Scale the processing servers based on notifications sent from the SQS
      queues.

    3. Use CloudWatch metrics to understand the load capacity on the processing servers.
      Ensure that SNS is used to scale up the servers based on notifications.

    4. Use CloudWatch metrics to understand the load capacity on the processing servers and
      then scale the capacity accordingly.
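The queue-length-based approach described in option 1 amounts to a simple backlog calculation; in practice you would publish the SQS queue-depth metric to CloudWatch and let a scaling policy act on it. The sketch below is illustrative only — the function name, throughput figure, and capacity bounds are all assumptions:

```python
import math

def desired_instances(queue_length, msgs_per_instance_per_minute,
                      min_capacity=2, max_capacity=50):
    """Size the processing fleet from the current backlog: enough
    instances to drain the queue within roughly one minute,
    clamped to the Auto Scaling group's min/max capacity."""
    needed = math.ceil(queue_length / msgs_per_instance_per_minute)
    return max(min_capacity, min(max_capacity, needed))

# Lunch rush: 1,200 queued orders, each instance processes ~100/minute
print(desired_instances(1200, 100))  # scales out to 12
# Quiet period: backlog near zero, fleet shrinks to the configured floor
print(desired_instances(5, 100))     # scales in to the minimum of 2
```

Because the queue decouples receiving from processing, no orders are lost while the fleet catches up — the backlog simply drains once capacity arrives.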



  8. Your CIO has recently announced that you will be migrating to AWS to reduce your costs
    and improve overall agility by benefitting from the breadth and depth of the platform.

    You have a number of on-premises Oracle databases, which are not only expensive from a
    licensing perspective but also difficult to scale. You have realized that you can get the same
    performance from Amazon Aurora at 1/10 of the cost and hence would like to proceed with
    the migration. You want to create the schema beforehand on the Amazon Aurora MySQL instance.

    Which of the following is the easiest approach in getting this done?

    1. Use the AWS Database Generation Tool to generate the schema in the target database.

    2. Use the AWS Schema Conversion Tool to generate the schema in the target database.

    3. Create scripts to generate the schema in the target database.

    4. Use the AWS Config Tool to generate the schema in the target database.

  9. Which of the following can be used to move extremely large amounts of data to AWS with
    up to 100 PB per device?

    1. AWS Snowmobile

    2. AWS Snowball

    3. AWS S3 Export

    4. AWS S3 Transfer Acceleration

    5. AWS Direct Connect

  10. You have recently moved to AWS but still maintain an on-premises data center. You have
    already migrated your BI/analytics and DWH workloads to Amazon Redshift and now
    need to migrate large volumes of data to Redshift to ensure that the weekly reports have
    fresh data. Which AWS-managed service can be used for this data transfer in a simple, fast,
    and secure way? (Choose two.)

    1. Direct Connect

    2. Import/Export to AWS

    3. Data Pipeline

    4. Snowball


References

  1. aws.amazon.com/iot/

  2. https://docs.aws.amazon.com/streams/latest/dev/introduction.html

  3. https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html

  4. https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/
    what-is-datapipeline.html

  5. https://docs.aws.amazon.com/snowball/latest/snowcone-guide/index.html

  6. https://docs.aws.amazon.com/snowball/latest/developer-guide/index.html

Chapter 3: Data Storage


Review Questions

  1. You need a cost-effective solution to store a large collection of audio, video, and PDF
    files and provide users with the ability to track and analyze all your data efficiently using
    your existing business intelligence tools. Which of the following would form the solution
    required to fulfill the requirements?

    1. Store the data in Amazon DynamoDB and reference its location in Amazon Redshift.
      Amazon Redshift will keep track of metadata about your audio, video, and PDF files,
      but the files themselves would be stored in Amazon S3.

    2. Store the data in Amazon S3 and reference its location in Amazon Redshift. Amazon
      Redshift will keep track of metadata about the files, but the files themselves would be
      stored in Amazon S3.

    3. Store the data in Amazon S3 and reference its location in Amazon DynamoDB. Use
      Amazon DynamoDB only for the metadata, but the actual files will remain stored in
      Amazon S3.

    4. Store the data in Amazon S3 and reference its location in HDFS on Amazon EMR.
      Amazon EMR will keep track of metadata about your files, but the files themselves
      would be stored in Amazon S3.

  2. You have recently joined an online video streaming company that is looking to stream video
    files onto Amazon S3. Which of the following services can be used to deliver real-time
    streaming data to S3 with little to no coding required? (Select one option.)

    1. Spark Streaming on Amazon EMR

    2. Amazon Kinesis Data Firehose

    3. Amazon Kinesis Data Streams

    4. Amazon Redshift

    5. Amazon EMR

  3. You have recently joined a new gaming company as data architect. The company's latest
    game, Chocolate Bites, has been an overwhelming success, resulting in a large volume of
    log files. You have been asked to ensure that the log files are made available for access at
    the cheapest price point. The data will be accessed once every few weeks, but it needs to be
    readily available when the access request is made. You have realized that S3 is a good option
    for such a scenario. Which of the following S3 storage options should you use?

    1. Amazon S3 Standard-Infrequent Access

    2. Amazon S3 Standard

    3. Amazon S3 Glacier

    4. Amazon S3 Reduced Redundancy Storage



  4. You are working with a team of engineers who are using DynamoDB to build the
    leaderboard for their online multiplayer gaming application. In order to boost read
    performance, a caching layer is being considered. Which of the following is a caching
    service compatible with your DynamoDB-based application?

    1. Memcached

    2. DAX

    3. Redis

    4. ElastiCache

  5. Rogers Inc., a video rental company, has set up an online site to make its rentals available to
    a wider community. It ships the latest videos that are not available on other streaming sites
    and charges a small percentage in addition to the shipping costs. The company has its
    website on AWS and is using DynamoDB behind its web application. The database has a main
    table called videos, which contains two attributes: videoid and subscriberid, the user
    who has rented the video. You are required to select a primary key for this table to optimize
    access based on the subscriber identifier. Which of the following would you use as a
    primary key for this table?

    1. videoid, where there is a single video with lots of subscribers

    2. subscriberid, where there are lots of subscribers to a single video

    3. Genre, where there are few genres to a huge number of videos

    4. None of the above
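The reasoning here hinges on how DynamoDB routes items by hashing the partition key. The following toy model (md5 stands in for DynamoDB's internal hash function, and the partition count is made up) shows why keying on subscriberid keeps one subscriber's rentals together:

```python
import hashlib

def partition_for(key, n_partitions=8):
    """Toy model of hash-based partitioning: the partition key value
    is hashed to select a storage partition."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_partitions

# Items keyed by subscriberid: one subscriber's rentals share a partition,
# so "videos rented by subscriber sub-42" is a single-partition lookup
# rather than a scan across the whole table.
rentals = [("sub-42", f"video-{i}") for i in range(5)]
touched = {partition_for(subscriber) for subscriber, _ in rentals}
print(len(touched))  # 1
```

With videoid as the key instead, a subscriber's rentals would scatter across partitions, and access by subscriber identifier would require a full scan or a secondary index.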

  6. Which of the following statements about Amazon DocumentDB are true? (Choose two.)

    1. Amazon DocumentDB is Cassandra compatible.

    2. Amazon DocumentDB is MongoDB compatible.

    3. Amazon DocumentDB can scale up to 10 TB per cluster.

    4. Amazon DocumentDB can scale up to 64 TB per cluster.

  7. Amazon DocumentDB can only run in a VPC. Is this statement true or false?

    1. True

    2. False

  8. Which of the following is an ideal use case for a graph database like Amazon Neptune?

    1. Fraud detection

    2. Recommendation engines

    3. Knowledge graph

    4. All of the above

Chapter 4: Data Processing and Analysis


Review Questions

  1. You are working as an Enterprise Architect for a large fashion retailer based out of
    Madrid, Spain. The team is looking to build ETL pipelines and has large datasets that need
    to be transformed. Data is arriving from a number of sources, and hence de-duplication
    is also an important factor. Which of the following is the simplest way to process
    data on AWS?

    1. Load data into Amazon Redshift and build transformations using SQL. Build a custom
      de-duplication script.

    2. Use AWS Glue to transform the data using the built-in FindMatches ML Transform.

    3. Load data into Amazon EMR, build Spark SQL scripts, and use a custom
      de-duplication script.

    4. Use Amazon Athena for transformation and de-duplication.

  2. Which of the following is a distributed data processing option on Apache Hadoop and was
    the main processing engine until Hadoop 2.0?

    1. MapReduce

    2. YARN

    3. Hive

    4. ZooKeeper

  3. You are working as a consultant for a telecommunications company. The data scientists
    have requested direct access to the data to dive deep into the structure of the data and build
    models. They have good knowledge of SQL. Which of the following tools would you choose
    to provide them direct access to the data while reducing the infrastructure and maintenance
    overhead, ensuring that access to data on Amazon S3 can be provided? (Choose one.)

    1. Amazon S3 Select

    2. Amazon Athena

    3. Amazon Redshift

    4. Apache Presto on Amazon EMR


  4. Which of the following file formats are supported by Amazon Athena? (Choose Three.)

    1. Apache Parquet

    2. CSV

    3. DAT

    4. Apache ORC

    5. Apache Avro

    6. TIFF



  5. You are working for a large utilities company that has deployed smart meters across its
    customer base. It is getting near real-time usage data from its customers and ingesting
    it into Amazon S3 via Amazon Kinesis. It was previously running some large-scale
    transformations using PySpark on its on-premises Hadoop cluster. It has the PySpark
    application available and expects no change other than input and output parameters while
    running the job. It is looking to reuse its code as much as possible, while exploring
    the possibility of tuning the environment specifically for its workload.

    Which of the following is the right data processing choice for this workload that meets the
    customer's requirements at the lowest cost? (Choose one.)

    1. Run the data processing on AWS Glue using the PySpark code.

    2. Run the data processing on Amazon EMR using Cluster mode.

    3. Run the data processing on Amazon EMR using Step execution mode using on-
      demand instances.

    4. Run the data processing on Amazon EMR using Step execution mode to utilize Spot
      instances.

  6. You are looking to run large scale data processing jobs on Amazon EMR running in a step-
    execution mode. The data processing jobs can be run at any time with input data available
    on Amazon S3. Which of the following options will ensure that the data remains available,
    provides a consistent view, and is encrypted for protection during and after the cluster is
    terminated after the completion of steps? (Choose One.)

    1. Use HDFS.

    2. Use EMRFS.

    3. Use Local disk on the EMR EC2 instances.

    4. Use EBS volumes.

  7. You are working for a large ecommerce retailer that would like to search the web logs for
    specific error codes and their reference numbers. You have the ability to choose any tool
    from the AWS stack. Which of the following tools would you recommend for this
    use case? (Choose one.)

    1. Amazon Redshift

    2. Apache Hive on Amazon EMR

    3. Apache Presto on Amazon EMR

    4. Amazon Elasticsearch Service

  8. Your customer is all in on AWS, and most of its high-velocity data flows through Amazon S3,
    Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, and Amazon DynamoDB.
    The customer is looking to analyze this streaming data and is contemplating choosing a
    service from the AWS stack. Which of the following services will you recommend to analyze
    this data? (Choose one.)

    1. Amazon Redshift

    2. Apache Hive on Amazon EMR



    3. Apache Pig on Amazon EMR

    4. Amazon Elasticsearch Service

  9. You are looking to build a data warehouse solution with the ability to flexibly transfer your
    data between the data lake and the data warehouse. Which of the following is the most
    cost-effective way to meet your requirements? (Choose one.)

    1. Use S3 as your data lake and Amazon EMR as your data warehouse.

    2. Use HDFS as your data lake and Amazon Redshift as your data warehouse.

    3. Use S3 as your data lake and Amazon Redshift as your data warehouse.

    4. Use HDFS as your data lake and Amazon EMR as your data warehouse.

  10. Which of the following statements are true about Redshift leader nodes? (Choose two.)

    1. A Redshift cluster can have a single leader node.

    2. A Redshift cluster can have more than one leader node.

    3. The Redshift leader node should have more memory than the compute nodes.

    4. The Redshift leader node has the exact same specifications as the compute nodes.

    5. You can choose your own leader node sizing, and it is priced separately.

    6. The Redshift leader node is chosen automatically and is free to the users.


References


Recommended Workshops

Data Ingestion and Processing workshop: dataprocessing.wildrydes.com

Incremental Data Processing on Amazon EMR (Apache Hudi):
incremental-data-processing-on-amazonemr.workshop.aws/en

Serverless Data Lake workshop: incremental-data-processing-on-amazonemr.workshop.aws/en

Data Engineering 2.0: aws-dataengineering-day.workshop.aws/en

Amazon Athena workshop: athena-in-action.workshop.aws

Amazon EMR with Service Catalog: s3.amazonaws.com/kenwalshtestad/cfn/public/sc/bootcamp/emrloft.html

Realtime Analytics and Serverless DataLake Demos: demostore.cloud

Streaming Analytics workshop: streaming-analytics.workshop.aws/en



Review Questions

  1. Which of the following business analytics services is provided by AWS?

    1. MicroStrategy

    2. Business Objects

    3. Tableau

    4. QuickSight

  2. You are looking to understand the correlation between the price and quantity sold of various
    products within your merchandise. Which is the best charting technique to display this?

    1. Combo charts

    2. Donut charts

    3. Heat maps

    4. Scatter plots

  3. True or false? Amazon QuickSight Standard edition allows you to provide access to users in
    your Microsoft Active Directory groups.

    1. True

    2. False

  4. You have recently been hired by a large e-commerce organization that is looking to
    forecast its annual sales in order to optimize inventory storage. Which of the following
    services will provide the organization with the simplest forecasting option allowing native
    integration with data from S3, Redshift, and Athena?

    1. D3.js

    2. MicroStrategy

    3. Kibana

    4. Amazon QuickSight

  5. A large manufacturing organization has started capturing data from sensors across its
    assembly lines. It is looking to perform operational analytics and build real-time
    dashboards based on specific index patterns, and it requires ultra-low latency. The data is
    available in Elasticsearch. Which of the following visualization tools is best suited for such
    an organization?

    1. Tableau

    2. EMR Notebooks

    3. Kibana

    4. Amazon QuickSight

Chapter 6: Data Security


Review Questions

  1. You are a data architect working for a large multinational organization that allows various
    suppliers to access inventory details available as files in your S3 bucket. The supplier has an
    AWS account. How can you provide the supplier access to this bucket?

    1. Create a new IAM group and grant the relevant access to the supplier on that bucket.

    2. Create a cross-account role for the supplier account and grant that role access to the S3
      bucket.

    3. Create a new IAM user and grant the relevant access to the supplier on that bucket.

    4. Create an S3 bucket policy that allows the supplier to read from the bucket from their
      AWS account.

  2. You are a data engineer who has been responsible for building a data warehouse using
    Redshift. You have set up the analysts with Redshift client applications running on an EC2
    instance to access Redshift. The analysts have complained about the inability to access the
    Redshift cluster. Which of the following should you do to ensure proper access to the
    Redshift cluster?

    1. Use the AWS CLI instead of the Redshift client tools.

    2. Modify the NACL on the subnet.

    3. Modify the VPC security groups.

    4. Attach the proper IAM role to the Redshift cluster for proper access to the EC2
      instance.

  3. You are working for a mid-sized tractor manufacturing company, which is providing access
    to the latest sales analytics stored on Redshift to its sales advisors' mobile devices.
    Applications on those mobile devices will need access to Amazon Redshift, where data marts
    are housed for large-scale analytics. Which of the following is the simplest and most secure
    way to provide access to your Redshift data store from your mobile application?

    1. Create a user in Redshift and provide the credentials to the mobile application.

    2. Allow a web identity federated user to assume a role that allows access to the Redshift
      tables and data marts by providing temporary credentials using STS.

    3. Create an IAM user and generate encryption keys for that user. Provide the user access
      to Redshift and hard-code the user credentials in the mobile application.

    4. Create a Redshift read-only access policy in IAM and use the policy within your
      mobile application.
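Option 2's core idea — short-lived, verifiable credentials instead of long-lived keys embedded in the app — can be illustrated with a toy token scheme. This is not the STS protocol; the secret, TTL, and role name below are invented for the sketch:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-secret"  # hypothetical; STS manages signing for you

def issue_temp_credentials(role, ttl_seconds=900):
    """Mint a signed, expiring token, analogous to STS temporary credentials."""
    payload = json.dumps({"role": role, "exp": time.time() + ttl_seconds}).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode(), signature

def is_valid(token, signature):
    """Verify the signature and reject expired tokens."""
    payload = base64.b64decode(token)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and \
        json.loads(payload)["exp"] > time.time()

token, sig = issue_temp_credentials("redshift-readonly")
print(is_valid(token, sig))         # True until the TTL lapses
print(is_valid(token, "tampered"))  # False: signature check fails
```

Because the credentials expire on their own, a compromised mobile device exposes at most a short window of access — which is why hard-coding permanent keys (option 3) is the anti-pattern.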

  4. True or false? Amazon Kinesis Data Firehose does not offer server-side encryption.

    1. True

    2. False



  5. Which of the following can be used to restrict user access to Athena operations, including
    Athena Workgroups?

    1. Athena Workgroups

    2. Athena Federated Query

    3. AWS Glue Catalog

    4. IAM (Identity and Access Management)


References and Further Reading

“Using AWS Marketplace for ML Workloads” – aws.amazon.com/blogs/awsmarketplace/using-aws-marketplace-for-machine-learning-workloads

“Setting up trust between ADFS and AWS using Active Directory Credentials to connect to Amazon Athena with ODBC driver” – aws.amazon.com/blogs/big-data/setting-up-trust-between-adfs-and-aws-and-using-active-directory-credentials-to-connect-to-amazon-athena-with-odbc-driver

“How to rotate Amazon Redshift Credentials in AWS Secrets Manager” – aws.amazon.com/blogs/security/how-to-rotate-amazon-documentdb-and-amazon-redshift-credentials-in-aws-secrets-manager

“Enabling Serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight” – aws.amazon.com/blogs/security/enabling-serverless-security-analytics-using-aws-waf-full-logs/

“Amazon QuickSight now supports audit logging with AWS CloudTrail” – aws.amazon.com/blogs/security/amazon-quicksight-now-supports-logging-with-aws-cloudtrail

AWS Certified Data Analytics Study Guide: Specialty (DAS-C01) Exam, First Edition. Asif Abbasi.

© 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.


Appendix: Answers to Review Questions



Chapter 1: History of Analytics and Big Data

  1. C. A is incorrect as Amazon S3 is great for storing massive amounts of data but is not
    suitable for real-time data ingestion and visualization.

    B is incorrect as Amazon Redshift is a good tool for large-scale data inserts and scans.
    However, real-time querying is not a great use case for Amazon Redshift due to potential
    concurrency limits.

    C is correct as Elasticsearch is the right technology for operational analytics.

    D is incorrect as Amazon DynamoDB is a technology that can provide sub-second latency
    for your applications, and it is recommended when you have well-known access paths.
    Typical dashboard applications run a scan of the data, which is not key-based access, and
    hence DynamoDB is not a great choice.

  2. B. A is incorrect because, while you can load the gzipped data, gzip is not a splittable
    format, and hence you will not benefit from the system's parallelism.

    B is correct, as it is a best practice to split the data into multiple files. Please read the
    AWS documentation:

    https://docs.aws.amazon.com/redshift/latest/dg/t_splitting-data-files.html

    Also, because you have multiple slices, it is better to have the number of files be a
    multiple of the Redshift slice count.

    C is incorrect, as dividing the data into a large number of smaller files would result in slow
    S3 listing operations and ineffective utilization of S3 reads. Each block read would return a
    considerably small amount of data, resulting in unnecessary reads.

    D is a plausible option. However, B is better due to the number of files being a multiple of
    the Redshift slice count. If we did not have B as an option, D would have been correct.
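The split guidance in answer B can be sketched as follows — a minimal, hypothetical helper that chops a load file into a multiple of the cluster's slice count (the slice count, row data, and function name are assumptions for illustration):

```python
import math

def split_for_copy(rows, slice_count, multiple=1):
    """Split rows into slice_count * multiple roughly equal chunks so that
    a Redshift COPY (pointed at a common key prefix) can load one file
    per slice in parallel instead of one big file on a single slice."""
    n_files = slice_count * multiple
    chunk = math.ceil(len(rows) / n_files)
    return [rows[i * chunk:(i + 1) * chunk] for i in range(n_files)]

# e.g., a hypothetical 2-node cluster with 2 slices per node = 4 slices
rows = [f"record-{i}" for i in range(1000)]
parts = split_for_copy(rows, slice_count=4)
print(len(parts), [len(p) for p in parts])  # 4 [250, 250, 250, 250]
```

Each part would then be uploaded (ideally compressed with a splittable-friendly scheme or as individually gzipped files) under one prefix, and a single COPY command loads them all in parallel.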

  3. C. A is incorrect, as while the solution will help build a dashboard, the sub-second latency
    requirement would not be met. Amazon S3 and Athena are good for ad hoc reporting.
    However, when the data is queried frequently, the latency is higher, and furthermore, the
    cost involved in frequent data scans would make this an invalid architecture.

    B is incorrect as Redshift is great for building data marts and data warehouses, but a
    sub-second latency is not possible considering you will have to connect using JDBC from
    QuickSight.

    C is correct, as Amazon Elasticsearch Service is the right approach. Elasticsearch is a great
    tool for near real-time operations (aws.amazon.com/elasticsearch-service/
    what-is-elasticsearch). Using Kibana would help you create a dashboard with a
    lower TCO.

    D is incorrect as Hive cannot be used to query the logs in a sub-second latency. Hive is a
    better tool for batch operations.



  4. B. A is incorrect. Migrating 50 TB of data using the CLI interface for Amazon S3 over an
    Internet connection will be time consuming. Since the requirement is to perform this
    migration quickly, this solution will not work.

    B is correct, as the data migration with Snowball will be a better option. Once the data is in
    S3, crawling with Glue to build a catalog and analyzing using Athena would be preferable
    because the customer wanted to manage as few servers as possible, and both Athena and
    Glue are serverless.

    C is incorrect: the data migration with Snowball will work, but using Hive on EMR
    running Spark to build a catalog is more expensive and requires more resources and
    additional scripting, which can be avoided if we select option B.

    D is incorrect because of the slow data migration with the CLI interface for Amazon S3 and
    the more complex cataloging with Hive on EMR instead of Glue.

  5. C. A is incorrect as the solution will be expensive. Since only 30 TB of data is needed,
    loading the data into HDFS and paying the associated storage cost on the Hadoop cluster
    is unnecessary and is less likely to generate any benefits for the customer from a cost
    perspective.

    B is incorrect due to the reasons mentioned in the explanation for option A. While Redshift
    is a good solution for cloud data warehousing, maintaining all the hot and cold data on
    Redshift is not a good approach and is an anti-pattern considering the earlier discussion on
    data lakes. The solution will work but will be more expensive to maintain. While Redshift
    offers cost advantages and is 10x cheaper than traditional data warehouse solutions, the
    customer can get better cost savings from other architectures.

    C is correct and a good solution to set up a multi-temperature warehouse. This solution
    will not only reduce the cost in the short term, it will also be a good architecture for the
    long term when the data grows. The data growth would mean that the customers could
    start using more of S3 and use Redshift Spectrum when the join between hot and cold data
    is required.

    D is incorrect. Elasticsearch is not a good tool for building a data warehouse where
    multiple access paths to the data need to be created. 30 TB of data on ES would be very
    expensive, and with each index, the storage requirement would increase manifold.
    Amazon Redshift is a better architecture choice for building a data warehouse.

  6. D. A is incorrect because CSV is a row format. Using Athena to query CSV would mean
    reading lots of unnecessary attributes of data, which would result in lower performance
    and higher data scanning cost.

    B is incorrect as loading the data into a VARCHAR column and then querying using
    JSON extraction functions would be more expensive and the join performance would be
    very poor.

    C is incorrect as attribute extraction will perform better, but data stored on Redshift
    is more expensive than on S3. Furthermore, in the long run this solution will not be
    cost effective.

    342 Appendix Answers to Review Questions


    D is correct, as it uses a serverless option to convert the data into a columnar format
    like Parquet and allows for a more robust solution that would be future-proof even with
    growing data sizes.

  7. C. A is incorrect as the number and types of columns should not be the first thing to look
    at. While there are certain cases where the column type may impact a query join, that
    should be looked at after the basics, which include examining predicates and sort keys,
    which limit the amount of data being scanned in a query.

    B is incorrect. Redshift primary key constraints are informational only.

    C is correct as sort keys act as an index and are responsible for limiting the data scanned
    from the disk.

    D is incorrect. Redshift is an MPP database, and if it’s designed properly, the number of
    rows in a table should not impact the query performance directly.

    E is incorrect. The data is in the database, and partitioning is not an option in Redshift.

  8. C. A is incorrect. AWS Glue can capture the initial dump, but the changes from databases
    that do not have incremental keys cannot be captured by AWS Glue. See the following web page:

    aws.amazon.com/blogs/database/how-to-extract-transform-and-load-data-
    for-analytic-processing-using-aws-glue-part-2/

    B is incorrect as AWS DMS can’t be used for capturing CSVs.

    C is correct as the initial dump can be done with Glue and incremental captures can be
    done with DMS for databases and AWS Glue for CSV files.

    D is incorrect as DMS cannot capture changes or the initial dump from CSV files.

  9. A. Amazon QuickSight does not allow you to make changes to the data in the reports.

  10. A. Amazon QuickSight allows you to create auto-narratives. See the following web page:

https://docs.aws.amazon.com/quicksight/latest/user/narratives-creating.html


Chapter 2: Data Collection

  1. D. Option A does not provide any support for analyzing data in real time. Option B is
    incorrect and vague. Option C involves Kinesis Firehose, which helps in aggregating the
    data rather than in real-time data analysis. Option D is correct as it involves Kinesis
    Analytics and the KCL.

  2. A. Amazon Kinesis is the service that is used to ingest real-time streaming data. Data
    Pipeline is primarily for batch, Amazon SQS is for message-based communication, and
    Amazon EMR has multiple engines that are primarily batch in nature. You can use Spark
    streaming, but that is not explicitly mentioned in the options.



  3. B, C, D. Explanation: A is incorrect as AWS Glue built-in classifiers cannot handle every
    data type. B, C, and D are correct. Refer to:
    https://docs.aws.amazon.com/glue/latest/dg/populate-data-catalog.html.

  4. A. IoT Rules Engine gives you the ability to transform the data being received from
    IoT devices.

  5. D. AWS Database Migration Service helps you migrate databases to AWS quickly and
    securely. The source database remains fully operational during the migration, minimizing
    downtime to applications that rely on the database. The AWS Database Migration Service
    can migrate your data to and from most widely used commercial and open-source
    databases.

    https://aws.amazon.com/dms

  6. D. AWS Glue is the recommended option as it is serverless in nature and offers a
    pay-as-you-go model with no fixed costs.

  7. A. Explanation: Using the queue length will help you scale the servers based on the
    demand, and also help to scale back down when the demand is low. Refer to the following
    link: https://amzn.to/3bWbDma.
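The backlog-per-instance idea behind this answer can be sketched in a few lines. The threshold, cap, and function name are illustrative assumptions, not part of any AWS API:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance, max_instances=20):
    """Scale the fleet so each instance owns at most msgs_per_instance
    messages of backlog; shrink again as the queue drains."""
    needed = math.ceil(queue_depth / msgs_per_instance) if queue_depth else 1
    return max(1, min(needed, max_instances))

print(desired_capacity(900, 100))  # 9 instances for a 900-message backlog
```

In practice the queue depth would come from the SQS `ApproximateNumberOfMessages` metric feeding an Auto Scaling target-tracking policy.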

  8. B. Explanation: The AWS Schema Conversion Tool is the right choice to convert a schema
    from Oracle to Amazon Aurora.

  9. A. Explanation: AWS Snowmobile is the recommended option for large, petabyte-scale
    data transfers.

  10. A, B. Explanation: Since this is a large amount of data, AWS Data Pipeline will not be a
    viable option considering you need the transfer to be simple and fast; Data Pipeline would
    require you to build a custom pipeline.

    Similarly, AWS Snowball is also not a fast option, as the turnaround time can be quite high.
    However, if you have Direct Connect and the ability to do Import/Export, that will be the
    fastest way to bring the data to Amazon S3 and copy it to Amazon Redshift.


Chapter 3: Data Storage

  1. B. A is incorrect as Amazon DynamoDB is not suitable for storing audio/video and
    PDF files.

    C is incorrect as storing metadata in DynamoDB might be suitable for faster access;
    however, it is not an ideal fit for BI applications.

    D is incorrect as HDFS is not an ideal fit for storing metadata.
    B is correct as Redshift is a great tool for analysis.

    Author note: If you got the answer wrong, do not worry, as you should be able to get a
    better handle on this after Chapter 4.



  2. B. A is incorrect as the question expects little to no coding; however, Spark Streaming
    will require effort to batch the data into Amazon S3.

    C is incorrect as Amazon Kinesis Data Streams is not a good fit to flush the data to
    Amazon S3.

    D is incorrect as Redshift is not a good fit for streaming use cases.

    E is incorrect as Amazon EMR itself provides streaming options like Flink and Spark
    Streaming but both require additional effort.

    B is correct as Amazon Kinesis Data Firehose provides native connectivity to Amazon S3.

  3. A. The nature of the request states that it is infrequent access, and hence option A is the
    best. All other options are either more expensive or do not fit the requirements due to the
    access and reliability needs of the request.

  4. B. DAX is a DynamoDB compatible service.

    See link: https://aws.amazon.com/dynamodb/dax

  5. B. A is incorrect as access has to be based on subscriber Id.

  6. B, D. A is incorrect as Amazon DocumentDB is not Cassandra-compatible and is in fact
    MongoDB-compatible.

    C is incorrect as Amazon DocumentDB can scale up to 64 TB.

  7. A. Amazon DocumentDB cluster can only be deployed inside a VPC.

  8. D. All of the use cases are graph use cases.


Chapter 4: Data Processing and Analysis

  1. B. AWS Glue is the simplest way to achieve data transformation using mostly point-and-
    click interface, and making use of a built-in de-duplication option using FindMatches ML
    Transform.

  2. A. Option A is correct as MapReduce was the default processing engine on Hadoop until
    Hadoop 2.0 arrived.

    Option B is incorrect as YARN is a resource manager for applications on Hadoop.

    Option C is incorrect as Hive is a SQL layer that makes use of processing engines such as
    MapReduce, Spark, and Tez.

    Option D is incorrect as ZooKeeper is a distributed configuration and synchronization
    service that acts as a naming registry for large distributed systems.

  3. B. Amazon Athena is an interactive query service that makes it easy to analyze data in
    Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to
    manage, and you pay only for the queries that you run.

    https://aws.amazon.com/athena



  4. A, B, D. Amazon Athena supports a wide variety of data formats like CSV, TSV, JSON,
    and text files and also supports open-source columnar formats such as Apache ORC and
    Apache Parquet. Athena also supports compressed data in Snappy, Zlib, LZO, and GZIP
    formats. By compressing, partitioning, and using columnar formats you can improve
    performance and reduce your costs.

    https://aws.amazon.com/athena/faqs

  5. D. A is incorrect as Glue provides limited options for custom configurations.

    B is incorrect as EMR in cluster mode is more expensive.

    C is incorrect as EMR in step mode is cheaper than cluster mode but still more expensive
    with on-demand instances.

    D is correct.

  6. B. The EMR File System (EMRFS) is an implementation of HDFS that all Amazon EMR
    clusters use for reading and writing regular files from Amazon EMR directly to Amazon
    S3. EMRFS provides the convenience of storing persistent data in Amazon S3 for use with
    Hadoop while also providing features like consistent view and data encryption.

    https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-fs.html

  7. D. Elasticsearch provides a fast, personalized search experience for your applications,
    websites, and data lake catalogs, allowing your users to quickly find relevant data. You get
    access to all of Elasticsearch’s search APIs, supporting natural language search,
    auto-completion, faceted search, and location-aware search. You can also use it to store,
    analyze, and correlate application and infrastructure log data to find and fix issues faster
    and improve application performance.

    https://aws.amazon.com/elasticsearch-service

  8. D. You can load streaming data into your Amazon Elasticsearch Service domain from
    many different sources. Some sources, like Amazon Kinesis Data Firehose and Amazon
    CloudWatch Logs, have built-in support for Amazon ES. Others, like Amazon S3, Amazon
    Kinesis Data Streams, and Amazon DynamoDB, use AWS Lambda functions as event
    handlers. The Lambda functions respond to new data by processing it and streaming it to
    your domain.

    https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html

  9. C. Please refer to the following documentation.

    https://aws.amazon.com/redshift

  10. A, D, F. Please read the section, “Redshift Architecture,” Chapter 4, “Data Processing and
    Analysis.”

https://docs.aws.amazon.com/redshift/latest/mgmt/overview.html



Chapter 5: Data Visualization

  1. D. Amazon QuickSight is the native tool provided by AWS. All others are partner products
    that can work with AWS tools but are not available directly as native AWS services.

  2. D. A correlation visualizes two or three measures against each other. Combo
    charts are good for trends and categories, whereas donut charts compare values for items
    in a dimension, such as percentages of a total amount. Heat maps show a measure for the
    intersection of two dimensions. A scatter plot, however, is a visualization technique that
    is often used to understand the correlation between different variables, such as the salary
    of an employee vs. the employee’s grade: two measures that, when plotted against
    one another, indicate their relevance to each other.
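The correlation a scatter plot reveals is what the Pearson coefficient measures numerically. A minimal stdlib-only sketch, using made-up salary-vs-grade data for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient: +1 means a perfect positive
    linear relationship between the two measures."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

salary = [40, 55, 70, 85, 100]  # illustrative data in thousands
grade = [1, 2, 3, 4, 5]
print(round(pearson(salary, grade), 2))  # 1.0: perfectly correlated
```

Plotting `salary` against `grade` as a scatter plot would show the points on a straight line, which is exactly what a coefficient of 1.0 encodes.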

  3. B. Please see the following documentation page: docs.aws.amazon.com/quicksight/
    latest/user/aws-directory-service.html.

  4. D. The question is looking for the simplest way to provide integration with S3, Redshift,
    and Athena. While D3.js, MicroStrategy, and Kibana can be used to visualize data from
    multiple sources, only Amazon QuickSight provides native integration to S3, Redshift, and
    Athena with forecasting capabilities.

  5. C. Amazon QuickSight, EMR Notebooks, and Tableau can be used for reporting, and
    Tableau and QuickSight specifically can be used for building dashboards. However,
    real-time reporting at ultra-low latency with data from Elasticsearch is best provided
    using Kibana as part of the ELK stack.


Chapter 6: Data Security

  1. D. Sharing resources with other accounts is done using cross-account access. Option A is
    incorrect as the supplier already has an account in place; the answer is also vague, as it
    does not specify where to create the IAM group. Option B is incorrect as there is no such
    thing as a cross-account role. Option C is incorrect. Please see the details captured here:
    https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3
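A sketch of the trust policy that makes cross-account access work: it lets principals in the supplier's account assume a role in yours via STS. The account ID is a placeholder, and a separate permissions policy would still scope what the role can do:

```python
# Trust policy for a cross-account role. 111122223333 is a placeholder
# for the supplier's AWS account ID; attach a permissions policy to the
# role to limit what the assumed session can actually access.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(trust_policy["Statement"][0]["Action"])  # sts:AssumeRole
```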

  2. C. By default, Amazon Redshift is locked down for access. In order for you to provide
    access to the cluster, you need to configure the security groups for the cluster with proper
    access paths.

  3. B. Whenever you come across such a question, it is important to understand that an IAM
    role is generally preferred over creating an IAM user. Furthermore, using STS (AWS
    Security Token Service), which offers web identity federation, can help you federate your
    application with identity providers like Facebook, Amazon, and Google. This not only
    allows fine-grained access control but also provides much tighter control.


  4. B. See the following link for more information:
    aws.amazon.com/about-aws/whats-new/2019/11/amazon-kinesis-data-firehose-adds-support-for-customer-provided-keys-for-server-side-encryption.

  5. D. See the following link for more information:
    docs.aws.amazon.com/athena/latest/ug/security-iam-athena.html.


AWS has more than a million customers in 190 countries across the globe.

To serve these customers, AWS maintains 24 regions spanning five
continents. Within each region there are availability zones. An AZ consists of
one to six data centers, with redundant power supplies and networking
connectivity.

AWS follows the shared responsibility model, which means AWS is
responsible for the security of the cloud and customers are responsible for
security in the cloud.

AWS has been continually expanding its services to support virtually any
cloud workload and now has more than 175 services that include compute,
storage, networking, database, analytics, application services, deployment,
management, and mobile services.


Questions

  1. If you want to run your relational database in the AWS cloud, which
    service would you choose?

    1. Amazon DynamoDB

    2. Amazon Redshift

    3. Amazon RDS

    4. Amazon ElastiCache

  2. If you want to speed up the distribution of your static and dynamic web
    content such as HTML, CSS, image, and PHP files, which service
    would you consider?

    1. Amazon S3

    2. Amazon EC2

    3. Amazon Glacier

    4. Amazon CloudFront

  3. What is a way of connecting your data center with AWS?

    1. AWS Direct Connect

    2. Optical fiber

    3. Using an Infiniband cable

    4. Using a popular Internet service from a vendor such as Comcast or
      AT&T

  4. What is each unique location in the world where AWS has a cluster of
    data centers called?

    1. Region

    2. Availability zone

    3. Point of presence

    4. Content delivery network

  5. You want to deploy your applications in AWS, but you don’t want to
    host them on any servers. Which service would you choose for doing
    this? (Choose two.)

    1. Amazon ElastiCache

    2. AWS Lambda

    3. Amazon API Gateway

    4. Amazon EC2

  6. You want to be notified of any failure happening in the cloud. Which
    service would you leverage for receiving the notifications?

    1. Amazon SNS

    2. Amazon SQS

    3. Amazon CloudWatch

    4. AWS Config

  7. How can you get visibility of user activity by recording the API calls
    made to your account?

    1. By using Amazon API Gateway

    2. By using Amazon CloudWatch

    3. By using AWS CloudTrail

    4. By using Amazon Inspector

  8. You have been tasked with moving petabytes of data to the AWS cloud.
    What is the most efficient way of doing this?

    1. Upload them to Amazon S3

    2. Use AWS Snowball

    3. Use AWS Server Migration Service

    4. Use AWS Database Migration Service

  9. How do you integrate AWS with the directories running on-premise in
    your organization?

    1. By using AWS Direct Connect

    2. By using a VPN

    3. By using AWS Directory Service

    4. Directly via the Internet

  10. How can you have a shared file system across multiple Amazon EC2
    instances?

    1. By using Amazon S3

    2. By mounting Elastic Block Storage across multiple Amazon EC2
      servers

    3. By using Amazon EFS

    4. By using Amazon Glacier

Answers

  1. C. Amazon DynamoDB is a NoSQL offering, Amazon Redshift is a data
    warehouse offering, and Amazon ElastiCache is used to deploy Redis or
    Memcached protocol–compliant server nodes in the cloud.

  2. D. Amazon S3 can be used to store objects; it can’t speed up the
    operations. Amazon EC2 provides the compute. Amazon Glacier is the
    archive storage.

  3. A. Your colocation or MPLS provider may use an optical fiber or
    Infiniband cable behind the scenes. If you want to connect over the
    Internet, then you need a VPN.

  4. A. AZs are inside a region, so they are not unique. A point of presence
    and a content delivery network both serve the purpose of speeding up
    content distribution.

  5. B, C. Amazon ElastiCache is used to deploy Redis or Memcached
    protocol–compliant server nodes in the cloud, and Amazon EC2 is a
    server.

  6. A. Amazon SQS is the queue service; Amazon CloudWatch is used to
    monitor cloud resources; and AWS Config is used to assess, audit, and
    evaluate the configurations of your AWS resources.

  7. C. Amazon API Gateway is a fully managed service that makes it easy
    for developers to create, publish, maintain, monitor, and secure APIs at
    any scale. Amazon CloudWatch is used to monitor cloud resources.
    AWS Config is used to assess, audit, and evaluate the configurations of
    your AWS resources, and Amazon Inspector is an automated security
    assessment service that helps improve the security and compliance of
    applications deployed on AWS.

  8. B. You can also upload data to Amazon S3, but if you have petabytes of
    data and want to upload it to Amazon S3, it is going to take a lot of time.
    The quickest way would be to leverage AWS Snowball. AWS Server
    Migration Service is an agentless service that helps coordinate,
    automate, schedule, and track large-scale server migrations, whereas
    AWS Database Migration Service is used to migrate the data of the
    relational database or data warehouse.

  9. C. AWS Direct Connect and a VPN are used to connect your corporate
    data center with AWS. You cannot use the Internet directly to integrate

    directories; you need a service to integrate your on-premise directory to
    AWS.

  10. C. Amazon S3 is an object store, Amazon EBS can’t be mounted across
    multiple servers, and Amazon Glacier is an extension of Amazon S3.





Questions

  1. What is the main purpose of Amazon S3 Glacier? (Choose all that
    apply.)

    1. Storing hot, frequently used data

    2. Storing archival data

    3. Storing historical or infrequently accessed data

    4. Storing the static content of a web site

    5. Creating a cross-region replication bucket for Amazon S3

  2. What is the best way to protect a file in Amazon S3 against accidental
    delete?

    1. Upload the files in multiple buckets so that you can restore from
      another when a file is deleted

    2. Back up the files regularly to a different bucket or in a different
      region

    3. Enable versioning on the S3 bucket

    4. Use MFA for deletion

    5. Use cross-region replication

  3. Amazon S3 provides 99.999999999 percent durability. Which of the
    following are true statements? (Choose all that apply.)

    1. The data is mirrored across multiple AZs within a region.

    2. The data is mirrored across multiple regions to provide the
      durability SLA.

    3. The data in Amazon S3 Standard is designed to handle the
      concurrent loss of two facilities.

    4. The data is regularly backed up to AWS Snowball to provide the
      durability SLA.

    5. The data is automatically mirrored to Amazon S3 Glacier to
      achieve high availability.

  4. To set up a cross-region replication, what statements are true? (Choose
    all that apply.)

    1. The source and target buckets should be in the same region.

    2. The source and target buckets should be in different regions.

    3. You must choose different storage classes across different regions.

    4. You need to enable versioning and must have an IAM policy in
      place to replicate.

    5. You must have at least ten files in a bucket.

  5. You want to move all the files older than a month to S3 IA. What is the
    best way of doing this?

    1. Copy all the files using the S3 copy command

    2. Set up a lifecycle rule to move all the files to S3 IA after a month

    3. Download the files after a month and re-upload them to another S3
      bucket with IA

    4. Copy all the files to Amazon S3 Glacier and from Amazon S3
      Glacier copy them to S3 IA

  6. What are the various ways you can control access to the data stored in
    S3? (Choose all that apply.)

    1. By using an IAM policy

    2. By creating ACLs

    3. By encrypting the files in a bucket

    4. By making all the files public

    5. By creating a separate folder for the secure files

  7. How much data can you store on S3?

    1. 1 petabyte per account

    2. 1 exabyte per account

    3. 1 petabyte per region

    4. 1 exabyte per region

    5. Unlimited

  8. What are the different storage classes that Amazon S3 offers? (Choose
    all that apply.)

    1. S3 Standard

    2. S3 Global

    3. S3 CloudFront

    4. S3 US East

    5. S3 IA

  9. What is the best way to delete multiple objects from S3?

    1. Delete the files manually using a console

    2. Use multi-object delete

    3. Create a policy to delete multiple files

    4. Delete all the S3 buckets to delete the files

  10. What is the best way to get better performance for storing several files
    in S3?

    1. Create a separate folder for each file

    2. Create separate buckets in different regions

    3. Use a partitioning strategy for storing the files

    4. Use the formula of keeping a maximum of 100 files in the same
      bucket

  11. The data across the EBS volume is mirrored across which of the
    following?

    1. Multiple AZs

    2. Multiple regions

    3. The same AZ

    4. EFS volumes mounted to EC2 instances

  12. I shut down my EC2 instance, and when I started it, I lost all my data.
    What could be the reason for this?

    1. The data was stored in the local instance store.

    2. The data was stored in EBS but was not backed up to S3.

    3. I used an HDD-backed EBS volume instead of an SSD-backed
      EBS volume.

    4. I forgot to take a snapshot of the instance store.

  13. I am running an Oracle database that is very I/O intense. My database
    administrator needs a minimum of 3,600 IOPS. If my system is not able
    to meet that number, my application won’t perform optimally. How can I
    make sure my application always performs optimally?

    1. Use Elastic File System since it automatically handles the
      performance

    2. Use Provisioned IOPS SSD to meet the IOPS number

    3. Use your database files in an SSD-based EBS volume and your
      other files in an HDD-based EBS volume

    4. Use a general-purpose SSD under a terabyte that has a burst
      capability

  14. Your application needs a shared file system that can be accessed from
    multiple EC2 instances across different AZs. How would you provision
    it?

    1. Mount the EBS volume across multiple EC2 instances

    2. Use an EFS instance and mount the EFS across multiple EC2
      instances across multiple AZs

    3. Access S3 from multiple EC2 instances

    4. Use EBS with Provisioned IOPS

  15. You want to run a MapReduce job (a part of the big data workload) for
    a noncritical task. Your main goal is to process it in the most
    cost-effective way. The task is throughput sensitive but not at all mission
    critical and can take a longer time. Which type of storage would you
    choose?

    1. Throughput Optimized HDD (st1)

    2. Cold HDD (sc1)

    3. General-Purpose SSD (gp2)

    4. Provisioned IOPS (io1)


Answers

  1. B, C. Hot and frequently used data needs to be stored in Amazon S3;
    you can also use Amazon CloudFront to cache the frequently used data.
    Amazon S3 Glacier is used to store the archive copies of the data or
    historical data or infrequent data. You can make lifecycle rules to move
    all the infrequently accessed data to Amazon S3 Glacier. The static
    content of the web site can be stored in Amazon CloudFront in
    conjunction with Amazon S3. You can’t use Amazon S3 Glacier for a
    cross-region replication bucket of Amazon S3; however, you can use S3
    IA or S3 RRS in addition to S3 Standard as a replication bucket for
    CRR.

  2. C. You can definitely upload the file to multiple buckets, but the cost
    will increase by the number of copies you store, and you now need to
    manage three or four times more files and map those files to
    applications, which does not make sense. Backing up files regularly to
    a different bucket can help you restore a file only to some extent; what
    if you uploaded a new file just after taking the backup? The correct
    answer is versioning, since enabling versioning maintains all the
    versions of a file and you can restore any version even if you have
    deleted the file. You can definitely use MFA for delete, but what if
    even with MFA you delete the wrong file? With CRR, if a DELETE
    request is made without specifying an object version ID, Amazon S3
    adds a delete marker, which cross-region replication replicates to the
    destination bucket. If a DELETE request specifies a particular object
    version ID to delete, Amazon S3 deletes that object version in the
    source bucket, but it does not replicate the deletion in the destination
    bucket.

  3. A, C. By default the data never leaves a region. If you have created an
    S3 bucket in a particular region, the data will always stay there unless
    you manually move it to a different region. Amazon does not back up
    data residing in S3 anywhere else, since the data is automatically
    mirrored across multiple facilities. However, customers can replicate
    the data to a different region for additional safety. AWS Snowball is
    used to migrate on-premises data to S3. Amazon S3 Glacier is the
    archival storage class of S3, and an automatic mirror of regular
    Amazon S3 data does not make sense. However, you can write
    lifecycle rules to move historical data from Amazon S3 to Amazon S3
    Glacier.

  4. B, D. Cross-region replication can’t be used to replicate the objects in
    the same region. However, you can use the S3 copy command or copy
    the files from the console to move the objects from one bucket to another
    in the same region. You can choose a different class of storage for CRR;
    however, this option is not mandatory, and you can use the same class of
    storage as the source bucket as well. There is no minimum number of
    files required to enable cross-region replication; you can even use
    CRR when there is only one file in an Amazon S3 bucket.

  5. B. Copying all the files using the S3 copy command is going to be a
    painful activity if you have millions of objects, and manually
    downloading and re-uploading the files wastes a lot of bandwidth and
    manpower when a lifecycle rule can do the same thing automatically.
    Amazon S3 Glacier is used mainly for archival storage; you should not
    copy anything into Amazon S3 Glacier unless you want to archive the
    files.
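The lifecycle rule in option B can be written as the configuration that boto3's `put_bucket_lifecycle_configuration` accepts. The bucket name and rule ID below are illustrative:

```python
# Lifecycle rule: transition objects to S3 Standard-IA 30 days after
# creation. The dict shape matches boto3's
# put_bucket_lifecycle_configuration; names are placeholders.
lifecycle = {
    "Rules": [
        {
            "ID": "move-to-ia-after-30-days",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

# Applying it would look like (requires AWS credentials, so commented out):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```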

  6. A, B. By encrypting the files in the bucket, you can make them secure,
    but it does not help in controlling the access. By making the files public,
    you are providing universal access to everyone. Creating a separate
    folder for secure files won’t help because, again, you need to control the
    access of the separate folder.

  7. E. Since the capacity of S3 is unlimited, you can store as much data as
    you want there.

  8. A, E. S3 Global is a region and not a storage class. Amazon CloudFront
    is a CDN and not a storage class. US East is a region and not a storage
    class.

  9. B. Manually deleting the files from the console is going to take a lot of
    time. You can’t create a policy to delete multiple files. Deleting buckets
    in order to delete files is not a recommended option. What if you need
    some files from the bucket?
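Multi-object delete accepts up to 1,000 keys per request, so a client typically chunks the key list into request-sized payloads. A small sketch with made-up key names:

```python
def delete_batches(keys, batch_size=1000):
    """S3 DeleteObjects accepts at most 1,000 keys per request, so
    chunk the key list into payloads of that size."""
    for i in range(0, len(keys), batch_size):
        yield {"Objects": [{"Key": k} for k in keys[i:i + batch_size]]}

# 2,500 keys need three DeleteObjects calls: 1000 + 1000 + 500.
batches = list(delete_batches([f"logs/{n}.gz" for n in range(2500)]))
print(len(batches))  # 3
```

Each yielded dict is the `Delete` payload a real `s3.delete_objects(Bucket=..., Delete=batch)` call would take.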

  10. C. Creating a separate folder does not improve performance. What if
    you need to store millions of files in these separate folders? Similarly,
    creating separate buckets in different regions does not improve the
    performance. There is no such rule of storing 100 files per bucket.
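A common partitioning strategy is Hive-style date prefixes, which spread keys across prefixes and let query engines such as Athena prune partitions instead of scanning everything. A sketch with illustrative names:

```python
from datetime import date

def partitioned_key(dataset, day, filename):
    """Build a Hive-style partitioned S3 key, e.g.
    dataset/year=YYYY/month=MM/day=DD/filename."""
    return f"{dataset}/year={day:%Y}/month={day:%m}/day={day:%d}/{filename}"

print(partitioned_key("clickstream", date(2020, 3, 15), "events-0001.gz"))
# clickstream/year=2020/month=03/day=15/events-0001.gz
```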

  11. C. Data stored in Amazon EBS volumes is redundantly stored in
    multiple physical locations within the same AZ; Amazon EBS
    replication does not span multiple availability zones.

  12. A. The only possible reason is that the data was stored in a local
    instance store that is not persisted once the server is shut down. If the
    data stays in EBS, then it does not matter if you have taken the backup or
    not; the data will always persist. Similarly, it does not matter if it is an
    HDD- or SSD-backed EBS volume. You can’t take a snapshot of the
    instance store.

  13. B. If your workload needs a certain number of IOPS, the best way is
    to use Provisioned IOPS. That way, you can ensure the application or
    the workload always meets the performance metric you are looking
    for.

  14. B. Use an EFS. The same EBS volume can’t be mounted across multiple
    EC2 instances.

  15. B. Since the workload is not critical and you want to process it in the
    most cost-effective way, you should choose Cold HDD. Though the
    workload is throughput sensitive, it is not critical and is low priority;
    therefore, you should not choose st1. gp2 and io1 are more expensive
    than other options like st1.




Questions

  1. You have created a VPC with two subnets. The web servers are running
    in a public subnet, and the database server is running in a private subnet.
    You need to download an operating system patch to update the database
    server. How are you going to download the patch?

    1. By attaching the Internet gateway to the private subnet temporarily

    2. By using a NAT gateway

    3. By using peering to another VPC

    4. By changing the security group of the database server and allowing
      Internet access

  2. What is the maximum size of the CIDR block you can have for a VPC?

    1. /16

    2. /32

    3. /28

    4. /10

  3. How many IP addresses are reserved by AWS for internal purposes in a
    CIDR block that you can’t use?

    1. 5

    2. 2

    3. 3

    4. 4

  4. You have a web server and an app server running. You often reboot your
    app server for maintenance activities. Every time you reboot the app

    server, you need to update the connect string for the web server since
    the IP address of the app server changes. How do you fix this issue?

    1. Allocate an IPv6 IP address to the app server

    2. Allocate an Elastic Network Interface to the app server

    3. Allocate an elastic IP address to the app server

    4. Run a script to change the connection

  5. To connect your corporate data center to AWS, you need at least which
    of the following components? (Choose two.)

    1. Internet gateway

    2. Virtual private gateway

    3. NAT gateway

    4. Customer gateway

  6. You want to explicitly “deny” certain traffic to the instance running in
    your VPC. How do you achieve this?

    1. By using a security group

    2. By adding an entry in the route table

    3. By putting the instance in the private subnet

    4. By using a network access control list

  7. You have created a web server in the public subnet, and now anyone can
    access the web server from the Internet. You want to change this
    behavior and just have the load balancer talk with the web server and
    no one else. How do you achieve this?

    1. By removing the Internet gateway

    2. By adding the load balancer in the route table

    3. By allowing the load balancer access in the NACL of the public
      subnet

    4. By modifying the security group of the instance and just having the
      load balancer talk with the web server

  8. How can your VPC talk with DynamoDB directly?

    1. By using a direct connection

    2. By using a VPN connection

    3. By using a VPC endpoint

    4. By using an instance in the public subnet

  9. The local route table in the VPC allows which of the following?

    1. So that all the instances running in different subnets within a VPC
      can communicate to each other

    2. So that only the traffic to the Internet can be routed

    3. So that multiple VPCs can talk with each other

    4. So that an instance can use the local route and talk to the Internet

  10. What happens to the EIP address when you stop and start an instance?

    1. The EIP is released to the pool and you need to re-attach.

    2. The EIP is released temporarily during the stop and start.

    3. The EIP remains associated with the instance.

    4. The EIP is available for any other customer.


Answers

  1. B. The database server is running in a private subnet. Anything running
    in a private subnet should never face the Internet directly. Even if you
    peer to another VPC, you can’t really connect to the Internet without
    using a NAT instance or a NAT gateway. Even if you change the security
    group of the database server and allow all incoming traffic, it still won’t
    be able to connect to the Internet because the database server is running
    in the private subnet and the private subnet is not attached to the Internet
    gateway.

  2. A. The maximum size of a VPC you can have is /16, which corresponds
    to 65,536 IP addresses.

  3. A. AWS reserves five IP addresses for internal purposes, the first four
    and the last one.
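These two numbers are easy to verify with Python's standard `ipaddress` module (a quick sanity check, not AWS tooling):

```python
import ipaddress

# A /16 VPC -- the largest CIDR block AWS allows for a VPC.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536

# In every subnet AWS reserves 5 addresses: the network address,
# the next three host addresses, and the last (broadcast) address.
subnet = ipaddress.ip_network("10.0.1.0/24")
usable = subnet.num_addresses - 5
print(usable)  # 251
```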

  4. C. Allocating an IPv6 IP address won’t be of any use because whenever
    the server comes back, it is going to get assigned another new IPv6 IP
    address. Also, if your VPC doesn’t support IPv6 and if you did not

    select the IPv6 option while creating the instance, you may not be able
    to allocate one. The Elastic Network Interface helps you add multiple
    network interfaces but won’t get you a static IP address. You can run a
    script to change the connection, but unfortunately you have to run it
    every time you are done with any maintenance activities. You can even
    automate the running of the script, but why add so much complexity
    when you can solve the problem simply by allocating an EIP?

  5. A, C. To connect to AWS from your data center, you need a customer
    gateway, which is the customer side of a connection, and a virtual
    private gateway, which is the AWS side of the connection. An Internet
    gateway is used to connect a VPC with the Internet, whereas a NAT
    gateway connects to the servers running in the private subnet in order to
    connect to the Internet.

  6. D. By using a security group, you can allow and disallow certain traffic,
    but you can’t explicitly deny traffic since the deny option does not exist
    for security groups. There is no option for denying particular traffic via
    a route table. By putting an instance in the private subnet, you are just
    removing the Internet accessibility of this instance, which is not going to
    deny any particular traffic.
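The difference can be sketched in a few lines of Python (an illustration, not AWS code: the rule numbers, ports, and actions are simplified stand-ins for real NACL rules):

```python
# NACLs evaluate numbered rules in order and support explicit DENY;
# security groups only have ALLOW rules, so anything not allowed is
# implicitly dropped -- there is no way to write an explicit deny.

def nacl_decision(rules, port):
    """rules: list of (rule_number, port, action), evaluated in order."""
    for _, rule_port, action in sorted(rules):
        if rule_port == port:
            return action      # first matching rule wins
    return "DENY"              # implicit deny at the end (the * rule)

nacl = [(100, 80, "ALLOW"), (200, 22, "DENY")]
print(nacl_decision(nacl, 22))   # DENY -- an explicit deny is possible
print(nacl_decision(nacl, 443))  # DENY -- implicit (no matching rule)

def sg_decision(allowed_ports, port):
    # Security groups: allow-listed or implicitly dropped, never denied.
    return "ALLOW" if port in allowed_ports else "DROP"
```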

  7. D. If you remove the Internet gateway, traffic coming through the
    load balancer won't be able to reach the instance either. Adding the
    load balancer to the route table does not restrict who can reach the
    instance. A NACL can allow or block certain traffic at the subnet
    level, but in this scenario you won't be able to use a NACL to limit
    access to just the load balancer; the instance's security group is
    the right tool.

  8. C. Direct Connect and VPN are used to connect your corporate data
    center to AWS; DynamoDB is a service running inside AWS. Even if you
    use an instance in a public subnet to connect to DynamoDB, the
    traffic still goes over the Internet. A VPC endpoint is the only
    option that lets your VPC talk to DynamoDB directly, bypassing the
    Internet.

  9. A. The traffic to the Internet is routed via the Internet gateway. Multiple
    VPCs can talk to each other via VPC peering.

  10. C. Even during the stop and start of the instance, the EIP is associated
    with the instance. It gets detached when you explicitly terminate an
    instance.

Questions

  1. You know that you need 24 CPUs for your production server. You also
    know that your compute capacity is going to remain fixed until next year,
    so you need to keep the production server up and running during that
    time. What pricing option would you go with?

    1. Choose the spot instance

    2. Choose the on-demand instance

    3. Choose the three-year reserved instance

    4. Choose the one-year reserved instance

  2. You are planning to run a database on an EC2 instance. You know that
    the database is pretty heavy on I/O. The DBA told you that you would
    need a minimum of 8,000 IOPS. What is the storage option you should
    choose?

    1. EBS volume with magnetic hard drive

    2. Store all the data files in the ephemeral storage of the server

    3. EBS volume with provisioned IOPS

    4. EBS volume with general-purpose SSD

  3. You are running your application on a bunch of on-demand servers. On
    weekends you have to kick off a large batch job, and you are planning to
    add capacity. The batch job you are going to run over the weekend can
    be restarted if it fails. What is the best way to secure additional compute
    resources?

    1. Use the spot instance to add compute for the weekend

    2. Use the on-demand instance to add compute for the weekend

    3. Use the on-demand instance plus PIOPS storage for the weekend
      resource

    4. Use the on-demand instance plus a general-purpose EBS volume
      for the weekend resource

  4. You have a compliance requirement that you should own the entire
    physical hardware and no other customer should run any other instance
    on the physical hardware. What option should you choose?

    1. Put the hardware inside the VPC so that no other customer can use
      it

    2. Use a dedicated instance

    3. Reserve the EC2 for one year

    4. Reserve the EC2 for three years

  5. You have created an instance in EC2, and you want to connect to it.
    What should you do to log in to the system for the first time?

    1. Use the username/password combination to log in to the server

    2. Use the key pair combination (private and public keys)

    3. Use your cell phone to get a text message for secure login

    4. Log in via the root user

  6. What are the characteristics of AMI that are backed up by the instance
    store? (Choose two.)

    1. The data persists even after the instance reboot.

    2. The data is lost when the instance is shut down.

    3. The data persists when the instance is shut down.

    4. The data persists when the instance is terminated.

  7. How can you make a cluster of an EC2 instance?

    1. By creating all the instances within a VPC

    2. By creating all the instances in a public subnet

    3. By creating all the instances in a private subnet

    4. By creating a placement group

  8. You need to take a snapshot of the EBS volume. How long will the EBS
    remain unavailable?

    1. The volume will be available immediately.

    2. EBS magnetic drive will take more time than SSD volumes.

    3. It depends on the size of the EBS volume.

    4. It depends on the actual data stored in the EBS volume.

  9. What are the different ways of making an EC2 server available to the
    public?

    1. Create it inside a public subnet

    2. Create it inside a private subnet and assign a NAT device

    3. Attach an IPv6 IP address

    4. Associate it with a load balancer and expose the load balancer to
      the public

  10. The application workload changes constantly, and to meet that, you keep
    on changing the hardware type for the application server. Because of
    this, you constantly need to update the web server with the new IP
    address. How can you fix this problem?

    1. Add a load balancer

    2. Add an IPv6 IP address

    3. Add an EIP to it

    4. Use a reserved EC2 instance


Answers

  1. D. You won’t choose a spot instance because the spot instance can be
    taken away at any time by giving notice. On-demand won’t give you the
    best pricing since you know you will be running the server all the time
    for the next year. Since you know the computation requirement is only
    for one year, you should not go with a three-year reserved instance.
    Rather, you should go for a one-year reserved instance to get the
    maximum benefit.
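A back-of-the-envelope comparison makes the reasoning concrete; the hourly rates below are hypothetical placeholders, not real AWS prices:

```python
HOURS_PER_YEAR = 24 * 365

# Hypothetical rates for the same instance type (NOT real AWS prices):
on_demand_rate = 1.00      # $/hour, pay as you go
reserved_1yr_rate = 0.60   # effective $/hour with a 1-year commitment

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_1yr_rate * HOURS_PER_YEAR

# Running 24x7 for a fixed one-year horizon, the one-year reserved
# instance is cheaper; a three-year commitment would outlast the need.
print(on_demand_cost, reserved_cost)
```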

  2. C. The magnetic hard drive won’t give you the IOPS number you are
    looking for. You should not put the data files in the ephemeral drives
    because as soon as the server goes down, you will lose all the data. For
    a database, data is the most critical component, and you can’t afford to
    lose that. The provisioned IOPS will give you the desired IOPS that
    your database needs. You can also run the database with general-
    purpose SSD, but there is no guarantee that you will always get the
    8,000 IOPS number that you are looking for. Only PIOPS will provide
    you with that capacity.

  3. A. Since you know the workload can be restarted from where it fails,
    the spot instance is going to provide you with the additional compute

    and pricing benefit as well. You can go with on-demand as well; the
    only thing is you have to pay a little bit more for on-demand than for the
    spot instance. You can choose a PIOPS or GP2 with the on-demand
    instance. If you choose PIOPS, you have to pay much more compared to
    all the other options.

  4. B. You can create the instance inside a VPC, but that does not
    prevent other customers from running instances on the same physical
    hardware. A dedicated instance provides exactly what you are looking
    for. Reserving the EC2 instance for one or three years won't help
    unless you reserve it as a dedicated instance.

  5. B. The first time you log in to an EC2 instance, you need the
    combination of the private and public keys. You won’t be able to log in
    using a username and password or as a root user unless you have used
    the keys. You won’t be able to use multifactor authentication until you
    configure it.

  6. A, B. If an AMI is backed up by an instance store, you lose all the data
    if the instance is shut down or terminated. However, the data persists if
    the instance is rebooted.

  7. D. A placement group is what clusters EC2 instances together. You
    can create the placement group within the VPC and in either a private
    or a public subnet, but simply creating the instances in the same VPC
    or subnet does not cluster them.

  8. A. The volumes are available irrespective of the time it takes to take the
    snapshot.

  9. A. If you create an EC2 instance in the public subnet, it is
    available from the Internet. Creating an instance inside a private
    subnet and attaching a NAT device won't give access from the
    Internet. Attaching an IPv6 address can provide Internet
    accessibility provided it is a public IPv6 address and not a private
    one. Exposing the load balancer to the public does not, by itself,
    give the public access to the EC2 instance.

  10. C. Even if you reserve the instance, you still need to remap the IP
    address. Even with IPv6 you need to remap the IP addresses. The load
    balancer won’t help because the load balancer also needs to be
    remapped with the new IP addresses.


    CHAPTER 5


    Identity and Access Management and
    Security on AWS

    In this chapter, you will



AWS Identity and Access Management (IAM) allows you to control
individual (user) and group access to all the AWS resources in a secure
way. Using IAM, you can define what each user can access in the AWS
cloud. For example, you can specify which users have administrator access,
which users have read-only access, which users can access certain AWS
services, and so on. Using the IAM service, you can choose the services that
specific users are going to use and what kind of privileges users should have.
In a nutshell, you control both authentication and authorization on the AWS
resources through identity and access management, which means IAM is
about whether you are really who you say you are as well as what you are
authorized or allowed to do. In addition, you can audit and log the users,
continuously monitor them, and review account activity.
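The core of the authorization logic can be sketched as follows (a simplification of real IAM policy evaluation, which also considers conditions, resource-based policies, permission boundaries, and more): access is denied by default, an explicit deny always wins, and the order in which policies are evaluated does not affect the result.

```python
def evaluate(statements):
    """statements: the 'Allow'/'Deny' effects of every policy statement
    that matches a request. Returns the final access decision."""
    if any(s == "Deny" for s in statements):
        return "Deny"    # an explicit deny always overrides any allow
    if any(s == "Allow" for s in statements):
        return "Allow"
    return "Deny"        # implicit default deny when nothing matches

print(evaluate([]))                  # Deny  (default)
print(evaluate(["Allow"]))           # Allow
print(evaluate(["Allow", "Deny"]))   # Deny  (order does not matter)
```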


Authentication

Authentication is about making sure you are who you say you are. IAM
supports several credentials for authentication: a username and password
for console access, access keys for programmatic access, and multi-factor
authentication (MFA) as an additional layer of security.


Questions

  1. What happens if you delete an IAM role that is associated with a
    running EC2 instance?

    1. Any application running on the instance that is using the role will
      be denied access immediately.

    2. The application continues to use that role until the EC2 server is
      shut down.

    3. The application will have the access until the session is alive.

    4. The application will continue to have access.

  2. For implementing security features, which of the following would you
    choose?

    1. Username/password

    2. MFA

    3. Using multiple S3 buckets

    4. Login using the root user

  3. Which is based on temporary security tokens? (Choose two.)

    1. Amazon EC2 roles

    2. Federation

    3. Username and password

    4. Using AWS STS

  4. You want EC2 instances to give access without any username or
    password to S3 buckets. What is the easiest way of doing this?

    1. By using a VPC S3 endpoint

    2. By using a signed URL

    3. By using roles

    4. By sharing the keys between S3 and EC2

  5. An IAM policy takes which form?

    1. Python script

    2. Written in C language

    3. JSON code

    4. XML code

  6. If an administrator who has root access leaves the company, what
    should you do to protect your account? (Choose two.)

    1. Add MFA to root

    2. Delete all the IAM accounts

    3. Change the passwords for all the IAM accounts and rotate keys

    4. Delete all the EC2 instances created by the administrator

  7. Using the shared security model, the customer is responsible for which
    of the following? (Choose two.)

    1. The security of the data running inside the database hosted in EC2

    2. Maintaining the physical security of the data center

    3. Making sure the hypervisor is patched correctly

    4. Making sure the operating system is patched correctly

  8. In Amazon RDS, who is responsible for patching the database?

    1. Customer.

    2. Amazon.

    3. In RDS you don’t have to patch the database.

    4. RDS does not come under the shared security model.


Answers

  1. B. No, you can’t add an IAM role to an IAM group.

  2. B, C. A policy is not location specific and is not limited to a user.

  3. A. The application will be denied access.

  4. A, B. Using multiple buckets won’t help in terms of security. Similarly,
    leveraging multiple regions won’t help to address the security.

  5. B, D. The username and password is not a temporary security token.

  6. C. A VPC endpoint is going to create a path between the EC2 instance
    and the Amazon S3 bucket. A signed URL won’t help EC2 instances

    from accessing S3 buckets. You cannot share the keys between S3 and
    EC2.

  7. C. It is written in JSON.

  8. A, C. Deleting all the IAM accounts is going to be a bigger painful task.
    You are going to lose all the users. Similarly, you can’t delete all the
    EC2 instances; they must be running some critical application or
    something meaningful.

  9. A, D. The customer is responsible for the security of anything running on
    the hypervisor, and therefore the operating system and the security of
    data are the customer’s responsibility.

  10. B. RDS does come under a shared security model. Since it is a managed
    service, the patching of the database is taken care of by Amazon.
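As the answers above note, an IAM policy is simply a JSON document. A minimal, illustrative policy (the bucket name is made up), parsed with Python's `json` module:

```python
import json

# A minimal, illustrative IAM policy document (the bucket is made up).
policy_text = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
"""
policy = json.loads(policy_text)
print(policy["Statement"][0]["Effect"])  # Allow
```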

Questions

  1. Where do you define the details of the type of servers to be launched
    when launching the servers using Auto Scaling?

    1. Auto Scaling group

    2. Launch configuration

    3. Elastic Load Balancer

    4. Application load balancer

  2. What happens when the Elastic Load Balancing fails the health check?
    (Choose the best answer.)

    1. The Elastic Load Balancing fails over to a different load balancer.

    2. The Elastic Load Balancing keeps on trying until the instance
      comes back online.

    3. The Elastic Load Balancing cuts off the traffic to that instance and
      starts a new instance.

    4. The load balancer starts a bigger instance.

  3. When you create an Auto Scaling mechanism for a server, which two
    things are mandatory? (Choose two.)

    1. Elastic Load Balancing

    2. Auto Scaling group

    3. DNS resolution

    4. Launch configuration

  4. You have configured a rule that whenever the CPU utilization of your
    EC2 goes up, Auto Scaling is going to start a new server for you. Which
    tool is Auto Scaling using to monitor the CPU utilization?

    1. CloudWatch metrics.

    2. Output of the top command.

    3. The ELB health check metric.

    4. It depends on the operating system. Auto Scaling uses the OS-native
      tool to capture the CPU utilization.

  5. The listener within a load balancer needs two details in order to listen
    to incoming traffic. What are they? (Choose two.)

    1. Type of operating system

    2. Port number

    3. Protocol

    4. IP address

  6. Which load balancer is not capable of doing the health check?

    1. Application load balancer

    2. Network load balancer

    3. Classic load balancer

    4. None of the above

  7. If you want your request to go to the same instance to get the benefits of
    caching the content, what technology can help provide that objective?

    1. Sticky session

    2. Using multiple AZs

    3. Cross-zone load balancing

    4. Using one ELB per instance

  8. You are architecting an internal-only application. How can you make
    sure the ELB does not have any Internet access?

    1. You detach the Internet gateway from the ELB.

    2. You create the instances in the private subnet and hook up the ELB
      with that.

    3. The VPC should not have any Internet gateway attached.

    4. When you create the ELB from the console, you can define whether
      it is internal or external.

  9. Which of the following is a true statement? (Choose two.)

    1. ELB can distribute traffic across multiple regions.

    2. ELB can distribute across multiple AZs but not across multiple
      regions.

    3. ELB can distribute across multiple AZs.

    4. ELB can distribute traffic across multiple regions but not across
      multiple AZs.

  10. How many EC2 instances can you have in an Auto Scaling group?

    1. 10.

    2. 20.

    3. 100.

    4. There is no limit to the number of EC2 instances you can have in
      the Auto Scaling group.


Answers

  1. B. You define the type of servers to be launched in the launch
    configuration. The Auto Scaling group is used to define the scaling
    policies, Elastic Load Balancing is used to distribute the traffic across
    multiple instances, and the application load balancer is used to
    distribute the HTTP/HTTPS traffic at OSI layer 7.

  2. C. When Elastic Load Balancing fails a health check, it cuts off the
    traffic to that instance; there is no failover to a different load
    balancer, and any internal failover is transparent to end users.
    Elastic Load Balancing keeps on trying, but it does not wait
    indefinitely for the instance to come back online; a replacement
    instance is started instead. The new instance is whatever is defined
    in the launch configuration, so it will be the same type of instance
    unless you have manually changed the launch configuration to start a
    bigger type of instance.

  3. B, D. The launch configuration and the Auto Scaling group are
    mandatory.
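The division of labor between the two mandatory pieces can be sketched with plain dictionaries (field names are modeled on the AWS API, and the AMI ID is a made-up placeholder): the launch configuration says *what* to launch, while the Auto Scaling group says *how many* and sets the scaling limits.

```python
# Sketch of the two mandatory pieces of an Auto Scaling setup.
launch_configuration = {
    "LaunchConfigurationName": "web-lc",
    "ImageId": "ami-12345678",    # hypothetical AMI ID
    "InstanceType": "t2.micro",   # WHAT to launch lives here
}

auto_scaling_group = {
    "AutoScalingGroupName": "web-asg",
    "LaunchConfigurationName": "web-lc",  # references the launch config
    "MinSize": 2,                         # HOW MANY lives here
    "MaxSize": 10,
    "DesiredCapacity": 2,
}

# The group must point at an existing launch configuration.
assert (auto_scaling_group["LaunchConfigurationName"]
        == launch_configuration["LaunchConfigurationName"])
```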

  4. A. Auto Scaling relies on the CloudWatch metrics to find the CPU
    utilization. Using the top command or the native OS tools, you should be
    able to identify the CPU utilization, but Auto Scaling does not use that.
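The scaling rule itself reduces to a threshold check on the CloudWatch metric. A toy sketch (the thresholds and sizes are arbitrary examples, not AWS defaults):

```python
def scale_decision(avg_cpu, desired, min_size, max_size,
                   high=70.0, low=30.0):
    """Toy scaling rule driven by a CloudWatch-style CPU average."""
    if avg_cpu > high:
        desired += 1          # scale out
    elif avg_cpu < low:
        desired -= 1          # scale in
    # The Auto Scaling group's limits always clamp the result.
    return max(min_size, min(max_size, desired))

print(scale_decision(85.0, desired=2, min_size=1, max_size=4))  # 3
print(scale_decision(10.0, desired=1, min_size=1, max_size=4))  # 1
```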

  5. B, C. Listeners define the protocol and port on which the load
    balancer listens for incoming connections.

  6. D. All the load balancers are capable of doing a health check.

  7. A. A sticky session binds a user's requests to the same instance,
    which gives you the benefit of cached content. Using multiple AZs
    distributes your load across Availability Zones, but you can't direct
    the request to go to the same instance. Cross-zone load balancing
    spreads traffic evenly across all instances, which works against
    caching on a single instance. Using one ELB per instance is going to
    complicate things.

  8. D. You can’t attach or detach an Internet gateway with ELB, even if you
    create the instances in a private subnet; and if you create an external-
    facing ELB instance, it will have Internet connectivity. The same applies
    for VPC; even if you take an IG out of the VPC but create ELB as
    external facing, it will still have Internet connectivity.

  9. B, C. ELB can span multiple AZs within a region. It cannot span
    multiple regions.

  10. D. There is no limit to the number of EC2 instances you can have in
    the Auto Scaling group. However, there might be an EC2 service limit
    on your account, which can be increased by logging a support ticket.

    With AWS Organizations you can organize accounts into groups and
    apply policies to those groups. Organizations enables you to centrally
    manage policies across multiple accounts, without requiring custom
    scripts and manual processes.


    Questions

    1. What are the languages that AWS Lambda supports? (Choose two.)

      1. Perl

      2. Ruby

      3. Java

      4. Python

    2. Which product is not a good fit if you want to run a job for ten hours?

      1. AWS Batch

      2. EC2

      3. Elastic Beanstalk

      4. Lambda

    3. What product should you use if you want to process a lot of streaming
      data?

      1. Kinesis Data Firehose

      2. Kinesis Data Stream

      3. Kinesis Data Analytics

      4. API Gateway

    4. Which product should you choose if you want to have a solution for
      versioning your APIs without having the pain of managing the
      infrastructure?

      1. Install a version control system on EC2 servers

      2. Use Elastic Beanstalk

      3. Use API Gateway

      4. Use Kinesis Data Firehose

    5. You want to transform the data while it is coming in. What is the easiest
      way of doing this?

      1. Use Kinesis Data Analytics

      2. Spin off an EMR cluster while the data is coming in

      3. Install Hadoop on EC2 servers to do the processing

      4. Transform the data in S3

    6. Which product is not serverless?

      1. Redshift

      2. DynamoDB

      3. S3

      4. AWS Lambda

    7. You have the requirement to ingest the data in real time. What product
      should you choose?

      1. Upload the data directly to S3

      2. Use S3 IA

      3. Use S3 reduced redundancy

      4. Use Kinesis Data Streams

    8. You have a huge amount of data to be ingested. You don’t have a very
      stringent SLA for it. Which product should you use?

      1. Kinesis Data Streams

      2. Kinesis Data Firehose

      3. Kinesis Data Analytics

      4. S3

    9. What is the best way to manage RESTful APIs?

      1. API Gateway

      2. EC2 servers

      3. Lambda

      4. AWS Batch

    10. To execute code in AWS Lambda, what is the size of the EC2 instance
      you need to provision in the back end?

      1. For code running less than one minute, use a T2 Micro.

      2. For code running between one minute and three minutes, use M2.

      3. For code running between three minutes and five minutes, use M2
        large.

      4. There is no need to provision an EC2 instance on the back end.

    11. What are the two configuration management services that AWS
      OpsWorks supports? (Choose two.)

      1. Chef

      2. Ansible

      3. Puppet

      4. Java

    12. You are designing an e-commerce order management web site where
      your users can order different types of goods. You want to decouple the
      architecture and would like to separate the ordering process from
      shipping. Depending on the shipping priority, you want to have a
      separate queue running for standard shipping versus priority shipping.
      Which AWS service would you consider for this?

      1. AWS CloudWatch

      2. AWS CloudWatch Events

      3. AWS API Gateway

      4. AWS SQS

    13. Your company has more than 20 business units, and each business unit
      has its own account in AWS. Which AWS service would you choose to
      manage the billing across all the different AWS accounts?

      1. AWS Organizations

      2. AWS Trusted Advisor

      3. AWS Cost Advisor

      4. AWS Billing Console

    14. You are running a job in an EMR cluster, and the job is running for a
      long period of time. You want to add additional horsepower to your
      cluster, and at the same time you want to make sure it is cost-effective.
      What is the best way of solving this problem?

      1. Add more on-demand EC2 instances for your task node

      2. Add more on-demand EC2 instances for your core node

      3. Add more spot instances for your task node

      4. Add more reserved instances for your task node

    15. Your resources were running fine in AWS, and all of a sudden you
      notice that something has changed. Your cloud security team told you
      that some API has changed the state of your resources that were running
      fine earlier. How do you track who has created the mistake?

      1. By writing a Lambda function, you can find who has changed what

      2. By using AWS CloudTrail

      3. By using Amazon CloudWatch Events

      4. By using AWS Trusted Advisor

    16. You are running a mission-critical three-tier application on AWS and
      have enabled Amazon CloudWatch metrics for a one-minute data point.
      How far back you can go and see the metrics?

      1. One week

      2. 24 hours

      3. One month

      4. 15 days

    17. You are running all your AWS resources in the US-East region, and you
      are not leveraging a second region using AWS. However, you want to
      keep your infrastructure as code so that you should be able to fail over
      to a different region if any DR happens. Which AWS service will you
      choose to provision the resources in a second region that looks identical
      to your resources in the US-East region?

      1. Amazon EC2, VPC, and RDS

      2. Elastic Beanstalk

      3. OpsWorks

      4. CloudFormation

    18. What is the AWS service you are going to use to monitor the service
      limit of your EC2 instance?

      1. EC2 dashboard

      2. AWS Trusted Advisor

      3. AWS CloudWatch

      4. AWS Config

    19. You are a developer and want to deploy your application in AWS. You
      don’t have an infrastructure background and are not sure about how to
      use infrastructure within AWS. You are looking for deploying your
      application in such a way that the infrastructure scales on its own, and at
      the same time you don’t have to deal with managing it. Which AWS
      service are you going to choose for this?

      1. AWS Config

      2. AWS Lambda

      3. AWS Elastic Beanstalk

      4. Amazon EC2 servers and Auto Scaling

    20. In the past, someone made some changes to your security group, and as a
      result an instance is not accessible by the users for some time. This
      resulted in nasty downtime for the application. You are looking to find
      out what change has been made in the system, and you want to track it.
      Which AWS service are you going to use for this?

      1. AWS Config

      2. Amazon CloudWatch

      3. AWS CloudTrail

      4. AWS Trusted Advisor


Answers

  1. C, D. Perl and Ruby are not supported by Lambda.

  2. D. Lambda is not a good fit because Lambda functions have a hard
    limit on execution time, measured in minutes, which is far short of a
    ten-hour job. Using Batch you can run your code for as long as you
    want. Similarly, you can run your code for as long as you want on EC2
    servers or by using Elastic Beanstalk.

  3. B. Kinesis Data Streams is built for ingesting and processing large
    amounts of streaming data in real time. Kinesis Data Firehose is
    mainly for loading streaming data into storage destinations, Kinesis
    Data Analytics is used for transforming and analyzing data, and API
    Gateway is used for managing APIs.

  4. C. EC2 servers and Elastic Beanstalk both need you to manage some
    infrastructure; Kinesis Data Firehose is used for ingesting data.

  5. A. Using EC2 servers or Amazon EMR, you can transform the data, but
    that is not the easiest way to do it. S3 is just the data store; it does not
    have any transformation capabilities.

  6. A. DynamoDB, S3, and AWS Lambda all are serverless.

  7. D. You can use S3 for storing the data, but if the requirement is to ingest
    the data in real time, S3 is not the right solution.

  8. B. Kinesis Data Streams is used for ingesting real-time data, and
    Kinesis Data Analytics is used for transformation. S3 is used to store
    the data.
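One reason Kinesis Data Streams suits real-time ingestion is its sharding model: each record's partition key is hashed with MD5 to pick a shard, so records for the same key stay in order. A simplified sketch (real Kinesis maps the hash onto the shards' hash-key ranges rather than taking a modulus):

```python
import hashlib

def shard_for(partition_key, num_shards):
    """Simplified stand-in for Kinesis shard assignment: hash the
    partition key with MD5 and reduce it to a shard index."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Records with the same partition key always land on the same shard,
# which preserves per-key ordering within the stream.
assert shard_for("customer-42", 4) == shard_for("customer-42", 4)
```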

  9. A. Theoretically EC2 servers can be used for managing the APIs, but if
    you can do it easily through API Gateway, why would you even
    consider EC2 servers? Lambda and Batch are used for executing the
    code.

  10. D. There is no need to provision EC2 servers since Lambda is
    serverless.

  11. A, C. AWS OpsWorks supports Chef and Puppet.

  12. D. Using SQS, you can decouple the ordering and shipping processes,
    and you can create separate queues for the ordering and shipping
    processes.
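The decoupling described in this answer can be sketched locally with Python's queue module standing in for two SQS queues (the queue names, message shape, and helper functions here are illustrative, not part of any AWS API):

```python
from queue import Queue

# Two independent queues decouple the services: the ordering side
# never waits on the shipping side.
order_queue = Queue()     # stands in for an SQS "orders" queue
shipping_queue = Queue()  # stands in for an SQS "shipping" queue

def place_order(order_id):
    # The front end only enqueues and returns immediately.
    order_queue.put({"order_id": order_id})

def process_orders():
    # The ordering worker drains its queue and hands work to shipping.
    while not order_queue.empty():
        order = order_queue.get()
        shipping_queue.put({"order_id": order["order_id"],
                            "status": "ready_to_ship"})

place_order(1)
place_order(2)
process_orders()
print(shipping_queue.qsize())  # → 2
```

Because each process reads only from its own queue, either side can fail or scale independently, which is the point of the SQS-based design.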

  13. A. Using AWS Organizations, you can manage the billing from various
    AWS accounts.

  14. C. You can add more spot instances as task nodes to finish the job
    early. Spot instances are the cheapest option, so this makes the
    solution cost-effective.

  15. B. Using AWS CloudTrail, you can find out who has changed what via
    API.

  16. D. When CloudWatch is enabled for a one-minute data point, the
    retention is 15 days.

  17. D. CloudFormation lets you keep your infrastructure as code: you can
    create a CloudFormation template that mirrors the setup in the
    existing region and deploy that template in a different region to
    create the same resources.
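As a sketch of what that looks like, here is a deliberately minimal template body built as a Python dict; the same JSON document can be submitted to CloudFormation in any region to create the same resources (the single bucket resource is just an illustrative example):

```python
import json

# One S3 bucket is enough to show the shape: Resources is the only
# required top-level section of a CloudFormation template.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example; deployable unchanged in any region",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
        },
    },
}

template_body = json.dumps(template, indent=2)
print(template_body)
```

Because the template names no region, deploying it in us-east-1 and eu-west-1 produces identical stacks.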

  18. B. Using Trusted Advisor, you can monitor the service limits for the
    EC2 instance.

  19. C. AWS Elastic Beanstalk is an easy-to-use service for deploying and
    scaling web applications. You can simply upload your code and Elastic
    Beanstalk automatically handles the deployment, from capacity
    provisioning, load balancing, and auto-scaling to application health
    monitoring.

  20. A. AWS Config maintains the configuration of the system and helps you
    to identify what change was made in it.

storing, querying, and updating items in a document format such as JSON,
XML, and HTML.

Amazon ElastiCache is a web service that makes it easy to deploy,
operate, and scale an in-memory cache in the cloud. Amazon ElastiCache
currently supports two different in-memory key-value engines: Memcached
and Redis. You can choose the engine you prefer when launching an
ElastiCache cache cluster.
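A typical use of ElastiCache is the cache-aside pattern: check the cache first and fall back to the database only on a miss. A minimal local sketch, with a plain dict standing in for the Redis or Memcached client (all names here are illustrative):

```python
db_calls = 0  # counts how often the (simulated) database is hit

def load_from_db(key):
    # Stands in for a slow query against the backing database.
    global db_calls
    db_calls += 1
    return f"value-for-{key}"

def cache_aside_get(key, cache):
    # Serve from the cache on a hit; on a miss, read the database
    # and populate the cache so the next read is served from memory.
    if key in cache:
        return cache[key]
    value = load_from_db(key)
    cache[key] = value
    return value

cache = {}  # stands in for an ElastiCache cluster
cache_aside_get("user:42", cache)  # miss: goes to the database
cache_aside_get("user:42", cache)  # hit: served from the cache
print(db_calls)  # → 1
```

The second read never touches the database, which is exactly the load reduction an in-memory cache provides.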

Amazon Neptune is a fully managed graph database service. It is used to
store and query highly connected data containing billions of nodes and
relationships. Using Amazon Neptune, you can create, store, and query highly
connected graph datasets. With Neptune, your applications can identify and
take advantage of the rich relationships between entities in the graph.

Amazon DocumentDB is a fully managed document database service that
supports MongoDB workloads. Using this service, you can store, query, and
index JSON data. DocumentDB is MongoDB compatible, which means a
vast majority of the applications, drivers, and tools you already use today
with your MongoDB database can be used with Amazon DocumentDB with
little or no change. DocumentDB is mainly used to store semi-structured data
as documents. The documents in a document database are stored in key-value
pairs, which define the structure or the schema of the document database.


Questions

  1. You are running your MySQL database in RDS. The database is critical
    for you, and you can’t afford to lose any data in the case of any kind of
    failure. What kind of architecture will you go with for RDS?

    1. Create the RDS across multiple regions using a cross-regional
      read replica

    2. Create the RDS across multiple AZs in master standby mode

    3. Create the RDS and create multiple read replicas in multiple AZs
      within the same region

    4. Create a multimaster RDS database across multiple AZs

  2. Your application is I/O bound and needs around 36,000 IOPS. The
    application you are running is critical for the business. How can you
    make sure the application always gets all the IOPS it requests and the
    database is highly available?

    1. Install the database in EC2 using an EBS-optimized instance, and
      choose an I/O-optimized instance class with an SSD-based hard
      drive

    2. Install the database in RDS using SSD

    3. Install the database in RDS in multi-AZs using Provisioned IOPS
      and select 36,000 IOPS

    4. Install multiple copies of read replicas in RDS so all the workload
      gets distributed across multiple read replicas and you can cater to
      the I/O requirement

  3. You have a legacy application that needs a file system in the database
    server to write application files. Where should you install the database?

    1. You can achieve this using RDS because RDS has a file system in
      the database server

    2. Install the database on an EC2 server to get full control

    3. Install the database in RDS, mount an EFS from the RDS server,
      and give the EFS mount point to the application for writing the
      application files

    4. Create the database using a multi-AZ architecture in RDS

  4. You are running a MySQL database in RDS, and you have been tasked
    with creating a disaster recovery architecture. What approach is easiest
    for creating the DR instance in a different region?

    1. Create an EC2 server in a different region and constantly replicate
      the database over there.

    2. Create an RDS database in the other region and use third-party
      software to replicate the data across the database.

    3. While installing the database, use multiple regions. This way, your
      database gets installed into multiple regions directly.

    4. Use the cross-regional replication functionality of RDS. This will
      quickly spin off a read replica in a different region that can be used
      for disaster recovery.

  5. If you encrypt a database running in RDS, what objects are going to be
    encrypted?

    1. The entire database

    2. The database backups and snapshot

    3. The database log files

    4. All of the above

  6. Your company has just acquired a new company, and the number of
    users who are going to use the database will double. The database is
    running on Aurora. What things can you do to handle the additional
    users? (Choose two.)

    1. Scale up the database vertically by choosing a bigger box

    2. Use a combination of Aurora and EC2 to host the database

    3. Create a few read replicas to handle the additional read-only
      traffic

    4. Create the Aurora instance across multiple regions with a
      multimaster mode

  7. Which RDS engine does not support read replicas?

    1. MySQL

    2. Aurora MySQL

    3. PostgreSQL

    4. Oracle

  8. What are the various ways of securing a database running in RDS?
    (Choose two.)

    1. Create the database in a private subnet

    2. Encrypt the entire database

    3. Create the database in multiple AZs

    4. Change the IP address of the database every week

  9. You’re running a mission-critical application, and you are hosting the
    database for that application in RDS. Your IT team needs to access all
    the critical OS metrics every five seconds. What approach would you
    choose?

    1. Write a script to capture all the key metrics and schedule the script
      to run every five seconds using a cron job

    2. Schedule a job every five seconds to capture the OS metrics

    3. Use standard monitoring

    4. Use advanced monitoring

  10. Which of the following statements are true for Amazon Aurora?
    (Choose three.)

    1. The storage is replicated at three different AZs.

    2. The data is copied at six different places.

    3. It uses a quorum-based system for reads and writes.

    4. Aurora supports all the commercial databases.

  11. Which of the following does Amazon DynamoDB support? (Choose
    two.)

    1. Graph database

    2. Key-value database

    3. Document database

    4. Relational database

  12. I want to store JSON objects. Which database should I choose?

    1. Amazon Aurora for MySQL

    2. Oracle hosted on EC2

    3. Amazon Aurora for PostgreSQL

    4. Amazon DynamoDB

  13. I have to run my analytics, and to optimize I want to store all the data in
    columnar format. Which database serves my need?

    1. Amazon Aurora for MySQL

    2. Amazon Redshift

    3. Amazon DynamoDB

    4. Amazon Aurora for Postgres

  14. What are the two in-memory key-value engines that Amazon
    ElastiCache supports? (Choose two.)

    1. Memcached

    2. Redis

    3. MySQL

    4. SQL Server

  15. You want to launch a copy of a Redshift cluster to a different region.
    What is the easiest way to do this?

    1. Create a cluster manually in a different region and load all the data

    2. Extend the existing cluster to a different region

    3. Use third-party software like Golden Gate to replicate the data

    4. Enable a cross-region snapshot and restore the database from the
      snapshot to a different region


Answers

  1. B. If you use a cross-regional replica or a read replica within the
    same region, the data replication happens asynchronously, so there is
    a chance of data loss. Multimaster is not supported in RDS. With a
    master and standby architecture, the data replication happens
    synchronously, so there is zero data loss.

  2. C. You could install the database in EC2, but if you can get the same
    benefits by installing the database in RDS, then why not? Simply
    choosing SSD storage in RDS does not guarantee 36,000 IOPS; only
    Provisioned IOPS guarantees the requested rate. Read replicas take
    care of only the read-only workload, and the requirement does not say
    how the 36,000 IOPS are divided between reads and writes.
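A sketch of what answer C amounts to: the parameters you would pass to the RDS CreateDBInstance API (for example via boto3's create_db_instance), shown as a plain dict rather than a live call. The identifier, instance class, and storage size are illustrative values, not requirements:

```python
# io1 (Provisioned IOPS) storage guarantees the requested IOPS rate;
# MultiAZ adds a synchronous standby in another AZ for high availability.
create_params = {
    "DBInstanceIdentifier": "critical-app-db",  # illustrative name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.4xlarge",         # illustrative size
    "StorageType": "io1",                       # Provisioned IOPS storage
    "Iops": 36000,                              # the guaranteed IOPS figure
    "AllocatedStorage": 1000,                   # GiB; must satisfy the IOPS:GiB ratio
    "MultiAZ": True,                            # synchronous standby
}
# In a real script: boto3.client("rds").create_db_instance(**create_params)
```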

  3. B. In this example, you need access to the operating system, and RDS
    does not give you access to the OS. You must install the database in an
    EC2 server to get complete control.

  4. D. You can achieve this by creating an EC2 server in a different region
    and replicating, but when your primary site is running on RDS, why not
    use RDS for the secondary site as well? You can use third-party
    software for replication, but when the functionality exists out of the box
    in RDS, why pay extra to any third party? You can’t install a database
    using multiple regions out of the box.

  5. D. When you encrypt a database, everything gets encrypted, including
    the database itself, backups, logs, read replicas, snapshots, and so
    on.

  6. A, C. You can’t host Aurora on an EC2 server. Multimaster is not
    supported in Aurora.

  7. D. Only RDS Oracle does not support read replicas; the rest of the
    engines do support it.

  8. A, B. Creating the database in multiple AZs is going to provide high
    availability and has nothing to do with security. Changing the IP address
    every week will be a painful activity and still won’t secure the database
    if you don’t encrypt it.

  9. D. In RDS, you don't have access to the OS, so you can't run a cron
    job, and you can't capture OS metrics by running a database job.
    Standard monitoring provides metrics at one-minute intervals at best;
    advanced (Enhanced) monitoring can deliver OS metrics as frequently
    as every second.
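Enhanced Monitoring is turned on per instance by setting a monitoring interval. A sketch of the ModifyDBInstance parameters (the identifier and role ARN are placeholders; the role must allow RDS to publish to CloudWatch Logs):

```python
# A MonitoringInterval of 5 delivers OS metrics every 5 seconds;
# valid values are 0 (off), 1, 5, 10, 15, 30, and 60.
modify_params = {
    "DBInstanceIdentifier": "critical-app-db",  # placeholder
    "MonitoringInterval": 5,
    "MonitoringRoleArn":
        "arn:aws:iam::123456789012:role/rds-monitoring-role",  # placeholder
}
# In a real script: boto3.client("rds").modify_db_instance(**modify_params)
```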

  10. A, B, C. Amazon Aurora supports only MySQL and PostgreSQL. It does
    not support commercial databases.

  11. B, C. Amazon DynamoDB supports key-value and document structures.
    It is not a relational database. It does not support graph databases.

  12. D. A JSON object is best stored in a NoSQL document store. Amazon
    Aurora for MySQL, Amazon Aurora for PostgreSQL, and Oracle are
    relational databases.

  13. B. Amazon Redshift stores all the data in columnar format. Amazon
    Aurora for MySQL and PostgreSQL store the database in row format,
    and Amazon DynamoDB is a NoSQL database.

  14. A, B. Amazon ElastiCache supports the Memcached and Redis in-memory
    engines. MySQL and SQL Server are relational databases, not in-
    memory engines.

  15. D. Loading the data manually would be too much work. You can't
    extend a cluster to a different region; a Redshift cluster is specific
    to a particular AZ and can't go beyond that AZ as of this writing.
    Golden Gate would cost a lot, and there is no need for it when an easy
    solution is available.

    principles: choose the best consumption model, use managed services,
    measure the overall efficiency, analyze the expenditure, and stop spending on
    a data center.

    These are the best practices in the cloud:


Questions

  1. How do you protect access to and the use of the AWS account’s root
    user credentials? (Choose two.)

    1. Never use the root user

    2. Use multifactor authentication (MFA) along with the root user

    3. Use the root user only for important operations

    4. Lock the root user

  2. What AWS service can you use to manage multiple accounts?

    1. Use QuickSight

    2. Use Organization

    3. Use IAM

    4. Use roles

  3. What is an important criterion when planning your network topology in
    AWS?

    1. Use both IPv4 and IPv6 IP addresses.

    2. Use nonoverlapping IP addresses.

    3. You should have the same IP address that you have on-premise.

    4. Reserve as many EIP addresses as you can since IPv4 IP
      addresses are limited.

  4. If you want to provision your infrastructure in a different region, what is
    the quickest way to mimic your current infrastructure in a different
    region?

    1. Use a CloudFormation template

    2. Make a blueprint of the current infrastructure and provision the
      same manually in the other region

    3. Use CodeDeploy to deploy the code to the new region

    4. Use the VPC Wizard to lay down your infrastructure in a different
      region

  5. Amazon Glacier is designed for which of the following? (Choose two.)

    1. Active database storage

    2. Infrequently accessed data

    3. Data archives

    4. Frequently accessed data

    5. Cached session data

  6. Which of the following will occur when an EC2 instance in a VPC with
    an associated elastic IP is stopped and started? (Choose two.)

    1. The elastic IP will be dissociated from the instance.

    2. All data on instance-store devices will be lost.

    3. All data on Elastic Block Store (EBS) devices will be lost.

    4. The Elastic Network Interface (ENI) is detached.

    5. The underlying host for the instance is changed.

  7. An instance is launched into the public subnet of a VPC. Which of the
    following must be done for it to be accessible from the Internet?

    1. Attach an elastic IP to the instance.

    2. Nothing. The instance is accessible from the Internet.

    3. Launch a NAT gateway and route all traffic to it.

    4. Make an entry in the route table, passing all traffic going outside
      the VPC to the NAT instance.

  8. To protect S3 data from both accidental deletion and accidental
    overwriting, you should:

    1. Enable S3 versioning on the bucket

    2. Access S3 data using only signed URLs

    3. Disable S3 delete using an IAM bucket policy

    4. Enable S3 reduced redundancy storage

    5. Enable multifactor authentication (MFA) protected access

  9. Your web application front end consists of multiple EC2 instances
    behind an elastic load balancer. You configured an elastic load balancer
    to perform health checks on these EC2 instances. If an instance fails to
    pass health checks, which statement will be true?

    1. The instance is replaced automatically by the elastic load balancer.

    2. The instance gets terminated automatically by the elastic load
      balancer.

    3. The ELB stops sending traffic to the instance that failed its health
      check.

    4. The instance gets quarantined by the elastic load balancer for root-
      cause analysis.

  10. You are building a system to distribute confidential training videos to
    employees. Using CloudFront, what method could be used to serve
    content that is stored in S3 but not publicly accessible from S3 directly?

    1. Create an origin access identity (OAI) for CloudFront and grant
      access to the objects in your S3 bucket to that OAI

    2. Add the CloudFront account security group called “amazon-
      cf/amazon-cf-sg” to the appropriate S3 bucket policy

    3. Create an Identity and Access Management (IAM) user for
      CloudFront and grant access to the objects in your S3 bucket to that
      IAM user

    4. Create an S3 bucket policy that lists the CloudFront distribution ID
      as the principal and the target bucket as the Amazon resource name
      (ARN)


Answers

  1. A, B. It is critical to keep the root user’s credentials protected, and to
    this end, AWS recommends attaching MFA to the root user and locking
    the credentials with the MFA in a physically secured location. IAM
    allows you to create and manage other nonroot user permissions, as
    well as establish access levels to resources.

  2. B. QuickSight is used for visualization. IAM and roles operate within
    a single account, not across multiple accounts.

  3. B. Whether to use IPv4 or IPv6 depends on what you are trying to do.
    You can't keep the same IP addresses you have on-premises; when you
    integrate an on-premises application with the cloud, you would end up
    with overlapping IP addresses, and your application in the cloud
    wouldn't be able to talk to the on-premises application. You should
    allocate only the number of EIPs you need; if you allocate an EIP but
    don't use it, you incur a charge on it.

  4. A. Creating a blueprint and working backward from there would be too
    much effort; why would you do that when CloudFormation can do it for
    you? CodeDeploy is used for deploying code, and the VPC Wizard is
    used to create VPCs.

  5. B, C. Amazon Glacier is designed for data archives and for
    infrequently accessed data.

  6. B, E. In a VPC, an elastic IP remains associated with the instance
    across a stop/start, and the ENI stays attached. Data on instance-
    store volumes is lost, data on EBS volumes persists, and the instance
    is brought back up on a different underlying host.

  7. B. Since the instance is launched into a public subnet, which already
    has a route to the Internet gateway, you don't have to do anything
    explicitly.

  8. A. Versioning preserves every prior version of an object, so it
    protects against both accidental deletion and accidental overwriting.
    Signed URLs control access, and disabling deletes would not prevent
    overwrites.
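To see why versioning covers both failure modes, here is a toy model of a versioned bucket (the class and method names are invented for illustration; real S3 versioning works through version IDs and delete markers):

```python
class VersionedBucket:
    """Toy model of S3 versioning: every write appends, nothing is destroyed."""

    def __init__(self):
        self.versions = {}  # key -> list of values (None = delete marker)

    def put(self, key, value):
        self.versions.setdefault(key, []).append(value)

    def delete(self, key):
        # A delete only adds a marker; older versions remain recoverable.
        self.versions.setdefault(key, []).append(None)

    def get(self, key):
        history = self.versions.get(key, [])
        return history[-1] if history else None

    def restore_previous(self, key):
        # Undo the latest version or delete marker.
        self.versions[key].pop()
        return self.get(key)

b = VersionedBucket()
b.put("report.csv", "v1")
b.put("report.csv", "v2-accidental-overwrite")
print(b.restore_previous("report.csv"))  # → v1
b.delete("report.csv")                    # accidental delete: just a marker
print(b.restore_previous("report.csv"))  # → v1
```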

  9. C. The ELB stops sending traffic to the instance that failed its health
    check.

  10. A. Create an OAI for CloudFront and grant access to the objects in your
    S3 bucket to that OAI.
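The grant in answer 10 takes the form of an S3 bucket policy whose principal is the OAI. A sketch built as a Python dict; the OAI ID and bucket name below are placeholders:

```python
# Only the OAI may read objects, so viewers must go through CloudFront;
# direct S3 URLs fail because no other principal is granted access.
oai_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/"
                       "CloudFront Origin Access Identity E1EXAMPLE"  # placeholder OAI
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::training-videos-bucket/*",  # placeholder bucket
        }
    ],
}
```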


    CHAPTER 1


    Review Questions

    You can find the answers in the Appendix.


    1. You are tasked with managing multiple AWS accounts for a
      large organization. What AWS service provides bulk account
      management and consolidated billing?

      1. AWS Identity and Access Management (IAM)

      2. AWS Organizations

      3. AWS Trusted Advisor

      4. AWS Billing Manager

    2. Which AWS service should you use to monitor applications and
      how they interact with your APIs?

      1. CloudTrail

      2. APIWatch

      3. CloudWatch

      4. APITrail

    3. You are a new hire at a company with several cloud
      applications. They currently have no monitoring in place for
      their applications. What is the first service you'd look into
      adding to their cloud setup?

      1. CloudTrail

      2. CloudWatch

      3. Trusted Advisor

      4. System Monitor

    4. Which of the following AWS facilities allows an application's
      resources to grow and shrink with demand?

      1. Elastic Load Balancing

      2. Elastic Compute

      3. Auto Scaling

      4. Route53

    5. Which of the following AWS facilities are part of a scalable
      cluster of EC2 instances? (Choose two.)

      1. Elastic load balancer

      2. CloudFront

      3. Auto Scaling groups

      4. Lambda

    6. Which of the following are AWS storage services? (Choose
      two.)

      1. EBS

      2. EC2

      3. RDS

      4. VPC

    7. What AWS service provides users, groups, roles, and policies?

      1. Identity and Authorization Management

      2. Identity and Access Management

      3. Information and Authorization Management

      4. Identity and Authentication Management

    8. Which of the following statements are true? (Choose all that
      apply.)

      1. AWS is responsible for the security of the cloud.

      2. AWS is responsible for security in the cloud.

      3. You (the customer) are responsible for the security of the
        cloud.

      4. You (the customer) are responsible for security in the
        cloud.

    9. Who is responsible for the security of regions and availability
      zones?

      1. AWS

      2. The customer

      3. The account owner

      4. Responsibility is shared between the customer and AWS.

    10. Which of the following is the basic networking component of
      AWS that contains subnets and instances?

      1. VPC

      2. VPN

      3. CLI

      4. Elastic Beanstalk

    11. You are tasked with creating a uniform set of deployment
      scripts. What AWS facility would you use to standardize your
      application deployment and provisioning?

      1. CloudFront

      2. CloudFormation

      3. JSON

      4. CloudLaunch

    12. Which of the following is not an AWS support plan?

      1. Free

      2. Basic

      3. Developer

      4. Enterprise

    13. What AWS component acts as an analog to firewalls in on-
      premises applications?

      1. Network ACLs

      2. Internet Gateway

      3. Amazon VPC

      4. CloudFormation templates

    14. What tool would you use to manage and interact with your
      AWS resources from a terminal or command prompt?

      1. AWS console

      2. AWS CLI

      3. AWS TLI

      4. AWS CloudFormation

    15. You are tasked with creating a network environment for a
      company that is moving their web applications into AWS.
      Which of the following AWS services are most important to
      creating this environment? (Choose two.)

      1. AWS CloudFormation

      2. Amazon EC2

      3. Amazon VPC

      4. Amazon RDS

    16. You are tasked with preparing a report on the advantages of
      AWS as compared to on-premises systems. As part of the
      report, you need to explain the responsiveness of AWS in
      dealing with services in the event of an outage. What would you
      need to consult to provide statistics and response times?

      1. Amazon VPC

      2. AWS Shared Responsibility Model

      3. AWS CloudFormation

      4. AWS Service Level Agreement

    17. You are tasked with preparing a report on the advantages of
      AWS as compared to on-premises systems. As part of the
      report, you need to explain which parts of the current
      architecture will no longer be the responsibility of your
      company to maintain. What would you need to consult to
      provide statistics and response times?

      1. Amazon VPC

      2. AWS Shared Responsibility Model

      3. AWS CloudFormation

      4. AWS Service Level Agreement

    18. Which of the following represents a separate geographic region
      in which AWS services run?

      1. Availability zone

      2. Region

      3. Edge location

      4. Compute center

    19. How many availability zones does each AWS region have?

      1. 2

      2. 3

      3. 5

      4. It varies based on the region and AWS resource
        requirements.

    20. Which of the following acts as a virtual datacenter within AWS?

      1. Compute center

      2. Region

      3. Availability zone

      4. Edge location

CHAPTER 2

Review Questions

You can find the answers in the Appendix.


  1. What is the default frequency at which CloudWatch collects
    metrics?

    1. 30 seconds

    2. 1 minute

    3. 5 minutes

    4. 10 minutes

  2. Which CloudWatch metric should you look at to determine how much of
    the available input/output operations per second (IOPS) capacity has
    been delivered on an EBS volume?

    1. ReadWriteThroughputPercentage

    2. ThroughputPercentage

    3. VolumeConsumedReadWriteOps

    4. VolumeThroughputPercentage

  3. What does the CloudWatch metric VolumeIdleTime report?

    1. The total number of minutes in a period of time when no
      read or write operations were submitted

    2. The total number of seconds in a period of time when no
      read or write operations were submitted

    3. The total number of minutes in a period of time the
      volume was waiting on an instance to complete a data
      transfer

    4. The total number of seconds in a period of time the volume
      was waiting on an instance to complete a data transfer

  4. Which CloudWatch metric would you use to see how much of
    available CPU is being used by a stalled EC2 instance?

    1. CPUUsage

    2. ComputeUtilization

    3. CPUUtilization

    4. ReadWriteUtilization

  5. Why might you use a resource group in CloudWatch?

    1. You need to monitor EC2 instances that are in multiple
      regions.

    2. You need to monitor EC2 instances that are in multiple
      availability zones.

    3. You need to monitor EC2 instances, S3 buckets, and an
      ECS cluster through a single dashboard.

    4. You need to monitor nondefault metrics on a set of EC2
      instances and S3 buckets.

  6. You want to enable detailed monitoring on an EC2 instance.
    What steps should you follow?

    1. Stop the instance, select Enable Detailed Monitoring, and
      restart the instance.

    2. Select the instance and select Enable Detailed Monitoring.

    3. Stop the instance, terminate the instance, create a new
      instance, select Enable Detailed Monitoring, and start the
      new instance.

    4. Snapshot the instance, create a new instance from the
      snapshot, select Enable Detailed Monitoring on the new
      instance, and then start the new instance.

  7. What mechanism in AWS is used to group resources into a
    resource group?

    1. The user-defined tags on the resources

    2. The user-defined IAM role on the resources

    3. Shared characteristics in the resource names

    4. Ownership of the resources

  8. Which of the following is not supplied as a default metric in
    CloudWatch?

    1. Memory Usage

    2. CPU Usage

    3. Disk Usage

    4. Network IO

  9. What is the most frequent granularity that CloudWatch
    performs for updating the status of a standard metric?

    1. 30 seconds

    2. 1 minute

    3. 90 seconds

    4. 5 minutes

  10. What different levels of monitoring does CloudWatch offer?
    (Choose two.)

    1. Free

    2. Basic

    3. Frequent

    4. Detailed

  11. Which of the following are monitored on your EC2 instances by
    default CloudWatch metrics? (Choose two.)

    1. CPU

    2. Memory

    3. Throughput

    4. Status

  12. Which of the following statements is true regarding how
    CloudWatch monitors an Auto Scaling group created in the
    AWS management console versus one created via the CLI?

    1. An Auto Scaling group created using the CLI will use basic
      monitoring, but one created using the console will use
      detailed monitoring.

    2. An Auto Scaling group created using the CLI will use
      detailed monitoring, but one created using the console will
      use basic monitoring.

    3. Regardless of the creation method, Auto Scaling groups use
      basic monitoring by default.

    4. Regardless of the creation method, Auto Scaling groups use
      detailed monitoring by default.

  13. Which of the following is not possible using a custom
    CloudWatch metric?

    1. Scaling in an Auto Scaling group based on a concurrent
      connections metric

    2. Scaling out an Auto Scaling group based on number of
      requests received by the group

    3. Scaling out an Auto Scaling group based on number of
      active threads in the group

    4. Scaling in an Auto Scaling group when CPU usage gets low

  14. You have a custom CloudWatch metric that is monitoring
    network spikes on requests coming into your DynamoDB
    instances. You are seeing recurring spikes every third, fourth,
    and fifth minute of the hour. One of your developers believes
    the culprit is a long-running process triggered from an EC2
    instance. How can you best validate your developer's
    hypothesis without affecting system performance?

    1. Increase the frequency of the metric collection to every 10
      seconds and see if the spikes are persistent or they happen
      only at certain times within the third, fourth, and fifth
      minutes.

    2. Increase the frequency of the metric collection to every 10
      seconds and add an additional metric to monitor bytes out
      from the EC2 instance's network interface.

    3. Add additional metrics to monitor bytes out from the EC2
      instance running the process and see if there is a
      correspondence between the bytes out of the instance and
      the bytes into the DynamoDB instances.

    4. Turn off the process on the instance and see if the network
      spikes still occur on the DynamoDB instances.

  15. You have enabled detailed monitoring for CloudWatch on all
    standard metrics. How often will metrics be reported?

    1. 30 seconds

    2. 1 minute

    3. 5 minutes

    4. 10 minutes

  16. How often can a high-resolution metric be reported?

    1. 30 seconds

    2. 1 minute

    3. 1 second

    4. 1 millisecond

  17. Which of the following is not a cause for a CloudWatch Event
    being triggered?

    1. A preset schedule causes the triggering of the event.

    2. An EC2 instance starts up.

    3. A user logs into the AWS console.

    4. Code on an EC2 instance makes a request to a REST API.

  18. Which of the following is the prefix to predefined AWS events?

    1. AMZ

    2. AWS

    3. Amazon

    4. AMZN

  19. Which of the following would require custom programming—
    beyond easily defined CloudWatch alarms—to monitor?

    1. Network usage increasing to 80 percent or more of
      allocated capacity

    2. Network latency increasing to over 10 ms

    3. Network output dropping to 0 bytes

    4. Network usage dropping by more than 50 percent in a
      given hour

  20. Which of the following connects CloudWatch Events to targets?

    1. Rules

    2. Triggers

    3. Metrics

    4. Outputs

CHAPTER 3

Review Questions

You can find the answers in the Appendix.


  1. You are responsible for 12 different AWS accounts. You have
    been tasked with monitoring and reducing costs across these
    accounts and want to recommend AWS Organizations and its
    consolidated billing features. Which of the following could you
    use to support your argument that AWS Organizations should
    be used? (Choose two.)

    1. Traffic between accounts will not be subject to data
      transfer charges if those accounts are all in AWS
      Organizations.

    2. Multiple accounts can be combined and, through that
      combination, receive discounts that may reduce the total
      cost of all the accounts.

    3. All accounts can be tracked individually and through a
      single tool.

    4. All accounts in AWS Organizations will receive a 5 percent
      billing reduction in consolidated billing.

  2. Which of the following are not components of IAM? (Choose
    two.)

    1. Users

    2. Roles

    3. Organizational units

    4. Service control policies

  3. What is an AWS Organization OU?

    1. Orchestration unit

    2. Organizational unit

    3. Operational unit

    4. Offer of urgency

  4. What is an AWS Organization SCP?

    1. Service control policy

    2. Service control permissions

    3. Standard controlling permissions

    4. Service conversion policy

  5. To which of the following constructs is an AWS Organization
    SCP applied?

    1. To a service control policy

    2. To an IAM role

    3. To an organizational unit

    4. To a SAML user store

  6. Which of the following most closely mirrors an IAM
    permission document?

    1. A service control policy

    2. A service component policy

    3. An organizational unit

    4. An organizational policy

  7. To which of the following constructs can a service control
    policy be applied? (Choose two.)

    1. A user

    2. An organizational unit

    3. An account

    4. A group

  8. Which of the following is not a feature of AWS Organizations?

    1. Multi-account management



    2. Batch account creation

    3. Consolidated billing

    4. Multi-account permissions

  9. Which tool would you use to reduce or eliminate SSH access to
    a development account's EC2 instances?

    1. IAM

    2. CloudTrail

    3. AWS Organizations

    4. Trusted Advisor

  10. Which tool would you use to reduce or eliminate SSH access to
    all EC2 instances as a security policy in your company?

    1. IAM

    2. CloudTrail

    3. AWS Organizations

    4. Trusted Advisor

  11. What is the best reason to use AWS Organizations as the
    primary mechanism for billing management as opposed to
    resource tagging?

    1. You can tag only 100 resources in a single AWS account.

    2. You can tag only compute resources in an AWS account.

    3. Resource tags are ephemeral and are lost when a resource
      restarts.

    4. Tagging is generally not comprehensive due to low-level
      AWS system services.

  12. Which of the following is not an advantage of using AWS
    Organizations for consolidated billing?

    1. You'll receive a single bill for all of your accounts.



    2. You'll receive combined usage reports for resources across
      all of your accounts.

    3. You'll receive a discount on data movement between
      regions across all your accounts.

    4. You'll receive volume discounts based on usage across all
      your accounts.

  13. Your organization has 14 different accounts, all recently moved
    to management via AWS Organizations. Three accounts use
    reserved instances, each purchased at different price points.
    After moving these accounts into AWS Organizations, at what
    price are these reserved instances charged?

    1. Each account will continue to use its existing reserved
      instance hourly price.

    2. All accounts will use the lowest hourly price for all
      accounts.

    3. All accounts will use the average hourly price for all
      accounts.

    4. Hourly price for the instances will need to be recalculated
      by the AWS account Technical Account Manager (TAM).

  14. Which of the following might you use in setting up
    standardized development, test, and production accounts for
    your organization? (Choose two.)

    1. Organizational units

    2. Service control policies

    3. Consolidated billing

    4. Resource tagging

  15. Which of the following might you use in centralizing billing
    management of development, test, and production accounts for
    your organization? (Choose two.)

    1. Organizational units



    2. Service control policies

    3. Consolidated billing

    4. Resource tagging

  16. How many master accounts should an organization have?

    1. At least one

    2. Exactly one

    3. Two or more

    4. One for every region in the organization

  17. How many member accounts should an organization have?

    1. At least one

    2. Exactly one

    3. Two or more

    4. One for every region in the organization

  18. To how many organizational units can an account belong?

    1. Exactly one

    2. One or more

    3. One for every region in which the account has resources

    4. One for every account in the organization

  19. To how many OUs can another organizational unit belong?

    1. Zero, since nesting OUs is disallowed

    2. Exactly one

    3. One or more

    4. One for every account in the organization

  20. You have taken responsibility for a company's multiple AWS
    accounts. They currently have eight accounts and receive a bill
    for each account monthly. They would like to receive a single



    bill each month. Which of the following steps are required to
    implement this change? (Choose two.)

    1. Set up AWS Organizations.

    2. Turn on consolidated billing.

    3. Create a service control policy and apply it to all of the
      organization's accounts.

    4. Choose your master account from your available accounts,
      or create a new master account.

CHAPTER 4

  1. Which of the following does AWS Config provide? (Choose
    two.)

    1. Continuous deployment

    2. Continuous integration

    3. Continuous monitoring

    4. Continuous assessment

  2. You have set up AWS Config and want to notify your systems
    administrators if a change has been made. To what service
    should you connect AWS Config?

    1. AWS CloudTrail

    2. AWS CloudWatch

    3. SNS

    4. S3

  3. Where does AWS Config store configuration for the various
    services it monitors?

    1. RDS

    2. S3

    3. DynamoDB

    4. EFS

  4. Which of the following are not parts of a configuration item for
    a resource in the cloud? (Choose two.)

    1. A map of relationships between the resource and other
      resources

    2. AWS CloudWatch event IDs related to the resource

    3. Configuration data specific to the resource

    4. Metadata about connected resources


  5. You have a configuration item for an EC2 instance. Which of
    the following might be part of a configuration item for this
    instance? (Choose two.)

    1. The user who created the EC2 instance

    2. The instance type of the EC2 instance

    3. The time that the configuration item was captured

    4. How long the EC2 instance has been running

  6. You have created a custom rule and want to add it to AWS
    Config. What do you need to do to ensure evaluation of the
    rule?

    1. Create an EC2 instance and upload code to evaluate the
      rule to that instance.

    2. Create a Lambda function and upload code to evaluate the
      rule to the function.

    3. Paste code to evaluate the rule into the Add Evaluation
      Rule box in the AWS management console for the rule.

    4. Create a CloudFormation template and add code to
      evaluate the rule to the template.

  7. Which of the following are types of triggers for AWS Config
    rules? (Choose two.)

    1. Configuration changes

    2. Cyclic

    3. Periodic

    4. Recurring

  8. Which of the following are part of the resource configuration
    history that AWS Config provides? (Choose two.)

    1. A record of who made a change to a resource

    2. The source IP address of an API call to a REST API



    3. The source IP address of a change made to the size of an
      EBS volume

    4. The number of AWS console logins on a given day

  9. How can you configure AWS Config to prevent noncompliant
    changes to resources?

    1. Turn on Ensure Compliancy in the AWS management
      console under the AWS Config section.

    2. Use the AWS CLI to enable the Ensure Compliance option
      in AWS Config.

    3. Write AWS Config rules to prevent changes from being
      made.

    4. You cannot prevent changes with AWS Config.

  10. How is AWS Config enabled on an AWS account?

    1. Once for the entire account

    2. Once for every region in the account

    3. AWS Config can be turned on or off multiple times but is
      configured on a per-region basis.

    4. AWS Config can be turned on or off multiple times, but
      that enabling applies to the entire account.

  11. To what does the term continuous integration refer?

    1. The ongoing integration of code into a version repository,
      typically with automatic testing ensuring no regressions
      are introduced by the new code

    2. The ongoing integration of configuration changes into
      AWS, typically with automated testing to ensure no
      regressions are introduced by the new configuration

    3. The ongoing integration of new development practices into
      a team, especially related to testing and deployment



    4. The ongoing integration of new releases into a particular
      environment, typically with automated testing of the
      deployment after it completes

  12. How many rules can you create by default in a single AWS
    account?

    1. 25

    2. 50

    3. 100

    4. 150

  13. Which of the following are required to create a new rule in
    AWS Config? (Choose two.)

    1. Whether the rule is change-triggered or periodic

    2. The ID or type of the resource to monitor

    3. A tag key to match on a resource

    4. The target to send the rule notification to

  14. Which of the following are allowed frequencies for periodic
    rules? (Choose two.)

    1. 5 minutes

    2. 1 hour

    3. 12 hours

    4. 48 hours

  15. You have recently added a number of AWS Config rules to
    ensure your resources are compliant. However, despite adding
    these rules, you are still receiving notices from your compliance
    team that resources are not correctly configured. What could be
    the source of this problem?

    1. Your config rules are likely inactive. Once you create a rule,
      you must set that rule to active to ensure resources are



      kept compliant.

    2. Your compliance rules do not match the compliance
      requirements from your organization's compliance team.
      Ensure that the rules are a match for requirements.

    3. Reduce the time between compliance checks in AWS
      Config. This will ensure less noncompliant time for
      resources that fall out of compliance.

    4. AWS Config rules do not prevent resources from falling out
      of compliance. They only notify when that has occurred.
      You would need to write code in a Lambda or other method
      to restore a resource back into compliance.

  16. You have created three rules in AWS Config related to
    CloudTrail. These rules check whether CloudTrail is enabled in
    your AWS account, whether CloudTrail is configured to use
    server-side encryption keys, and whether file validation is
    turned on for all
    trails. In your environment, CloudTrail is currently enabled and
    using server-side encryption but is not doing file validation on
    all trails. What would you expect the evaluation of this ruleset
    to return?

    1. Compliant

    2. Partially Compliant

    3. Noncompliant

    4. You will receive two Compliant evaluations and one
      Noncompliant evaluation.

  17. Which of the following questions does AWS Config not provide
    you with a means of answering?

    1. “What did my AWS resource look like yesterday at 8:00
      p.m.?”

    2. “What should my AWS resource look like to be in
      compliance with my organization's policies?”

    3. “Who made an API call to modify this resource?”



    4. “Which of my AWS resources are out of compliance with
      my preset organizational policies?”

  18. Which of the following services should you use to monitor all
    of your resources via AWS Config across multiple accounts and
    regions? (Choose two.)

    1. Consolidated Billing

    2. AWS Organizations

    3. Multi-Account Multi-Region Data Aggregation

    4. Multi-Account Authorization and Aggregation

  19. Which of the following steps are required to aggregate
    configuration data across multiple AWS accounts? (Choose
    two.)

    1. Create an S3 bucket for storing the information.

    2. Apply IAM policies to the bucket to allow writing to it from
      the other AWS accounts' AWS Config service.

    3. Use the AWS Log Aggregator service to aggregate logs
      across the different accounts.

    4. Set up an SNS topic for notifications.
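Question 19's multi-account aggregation hinges on the central S3 bucket trusting the AWS Config service in the source accounts. The sketch below, expressed as a Python dict, follows the statement pattern AWS documents for Config delivery into an S3 bucket; the bucket name and account ID are placeholders, so verify the exact statements against the AWS Config documentation before using anything like this.

```python
# Placeholder names: "central-config-logs" and "111122223333" are
# examples, not real resources. The statement shapes below follow the
# pattern AWS documents for Config delivery into an S3 bucket.
bucket = "central-config-logs"
source_account = "111122223333"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # AWS Config first checks that it may read the bucket ACL
            "Effect": "Allow",
            "Principal": {"Service": "config.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {   # ...then writes objects under AWSLogs/<account>/Config/
            "Effect": "Allow",
            "Principal": {"Service": "config.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{source_account}/Config/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}
```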

  20. Recently, your AWS costs have risen significantly and are
    attached to the AWS Config service. When you open up AWS
    Config, you find a huge number of rules and configuration
    items that are unfamiliar to you. How can you determine who
    added these rules to AWS Config?

    1. You can't; because only administrators can access AWS
      Config, all access is considered valid and not logged.

    2. You need to check S3 for the automatically generated AWS
      Config logs.

    3. You need to check CloudTrail, as API access to AWS Config
      is logged just as it is to any other resource with an API.



    4. The AWS Console shows a history of who created all rules.

CHAPTER 5




6. At this point, you can subscribe to this topic using a variety
of options: an application, Apple's Push Notification service,
or Google's Cloud Messaging service. For more on setting up
these services, consult
https://aws.amazon.com/blogs/aws/push-notifications-to-mobile-devices-using-amazon-sns/
or
https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html.
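The subscribe-then-publish flow in the step above can be sketched as a toy model. This is not boto3 or the real SNS API, and the endpoint strings are made up; it only illustrates how one published message fans out to every subscription on a topic.

```python
# Toy model of SNS fan-out: one publish, one delivery per subscription.
# Not boto3 and not the real SNS API; endpoint strings are made up.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []  # e.g. mobile push endpoints, email, SQS

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        # Every subscriber receives its own copy of the message
        return [(endpoint, message) for endpoint in self.subscribers]

alerts = Topic("admin-alerts")
alerts.subscribe("apns://device-token-1")  # hypothetical push endpoints
alerts.subscribe("gcm://device-token-2")
deliveries = alerts.publish("deployment finished")
print(len(deliveries))  # 2: one delivery per subscriber
```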



Review Questions

You can find the answers in the Appendix.


  1. Which AWS tool would you use to monitor performance of
    your application?

    1. CloudWatch

    2. CloudTrail

    3. AWS Config

    4. AWS Organizations

  2. Which AWS tool would you use to audit API usage of your
    application?

    1. CloudWatch

    2. CloudTrail

    3. AWS Config

    4. AWS Organizations

  3. Which AWS tool would you use to audit configuration changes
    to your AWS environment?

    1. CloudWatch

    2. CloudTrail

    3. AWS Config

    4. AWS Organizations

  1. Your management is concerned that too many people are using
    the AWS Config tool, potentially violating security protocols.
    What tool would you use to audit the usage of the AWS Config
    tool?

    1. CloudWatch

    2. CloudTrail

    3. AWS Config

    4. AWS Organizations

  2. You have created a trail with the default settings to log access to
    Lambda functions. You currently have functions in US East 2
    and US West 1. You're launching a new Lambda function in US
    East 1 and want to ensure that CloudTrail logs access to this
    function as well. What do you need to do?

    1. Update the trail configuration and add US East 1 as a
      region to monitor.

    2. Update the trail configuration and provide an S3 bucket for
      logged events in US East 1.

    3. Restart the trail in CloudTrail.

    4. Nothing. Access to the new Lambda function in the new
      region will automatically be handled by the existing trail.

  3. How many trails can you create in a single region?

    1. 3

    2. 5

    3. 20

    4. There is no preset limit on the number of trails you can
      have in a region.


  4. You have eight trails in your AWS CloudTrail configuration.
    Three apply to all regions and deposit logs in an S3 bucket in
    EU West 1. Two trails are single region, in EU West 2,
    depositing logs in an S3 bucket in EU West 2. One is in EU
    West 1 and uses the same EU West 1 bucket as the cross-region
    trails. Finally, you have a trail in US West 2. In what region
    must you locate the S3 bucket for the US West 2 trail to deposit
    logs?

    1. US West 2

    2. EU West 1

    3. EU West 2

    4. Any region you like

  5. You have eight trails in your AWS CloudTrail configuration.
    Three apply to all regions and deposit logs in an S3 bucket in
    EU West 1. Two trails are single region, in EU West 2,
    depositing logs in an S3 bucket in EU West 2. One is in EU
    West 1 and uses the same EU West 1 bucket as the cross-region
    trails. Finally, you have a trail in US West 2 writing logs to an
    S3 bucket in US West 2. How many more trails can you create
    in EU West 2 if those trails are intended to work across all
    regions?

    1. 0

    2. 1

    3. 2

    4. 3

  6. You have eight trails in your AWS CloudTrail configuration.
    Three apply to all regions and deposit logs in an S3 bucket in
    EU West 1. Two trails are single region, in EU West 2,
    depositing logs in an S3 bucket in EU West 2. One is in EU
    West 1 and uses the same EU West 1 bucket as the cross-region
    trails. Finally, you have a trail in US West 2 writing logs to an


    S3 bucket in US West 2. You are trying to create a new trail to
    function across all regions but are getting an error. What is
    preventing you from creating this trail?

    1. You have already created the maximum number of cross-
      region trails (three).

    2. You have already created the maximum number of trails
      for a single account (seven).

    3. You have already created the maximum number of trails in
      EU West 1 (five).

    4. You have already created the maximum number of trails in
      EU West 2 (five).

  7. What is AWS's system for sending out alerts and alarms based
    on specific events in an environment?

    1. SQS

    2. SNS

    3. SWF

    4. CloudTrail

  8. Which services listed here might be used as part of a solution to
    monitor potentially insecure interactions between an AWS
    application's API layer and non-AWS services? (Choose two.)

    1. SNS

    2. SWF

    3. CloudWatch

    4. CloudTrail

  9. Which of the following services might be used to detect a
    potential security breach of your applications running in AWS?
    (Choose two.)

    1. CloudWatch


    2. CloudTrail

    3. Trusted Advisor

    4. SWF

  10. You are in charge of a cloud migration from an on-premises
    datacenter to AWS. The system currently has a number of
    custom scripts that process system and application logs for
    auditing purposes. What AWS managed service could you use
    to replace these scripts and reduce the need for instances to run
    these custom processes?

    1. CloudWatch

    2. CloudTrail

    3. Trusted Advisor

    4. SWF

  11. You have just started working at a new organization with
    existing AWS accounts. What do you need to do in order to set
    up CloudTrail on these accounts?

    1. Turn on the CloudTrail service.

    2. Create a new trail for the CloudTrail service.

    3. Nothing; CloudTrail is automatically on and already
      logging activity.

    4. Enable AWS Organizations and set up a service control
      policy that allows CloudTrail access.

  12. Which of the following services is not supported by CloudTrail?

    1. Amazon Athena

    2. Amazon CloudFront

    3. AWS Elastic Beanstalk

    4. All of the above services are supported by CloudTrail.



  13. When applying a trail to all regions, how many actual trails are
    created?

    1. A single trail is used across all the regions.

    2. Trails that are configured like Auto Scaling groups will
      automatically grow and collapse based on total volume
      across all the regions.

    3. One trail is created for each region, and a master trail is
      created in the default region.

    4. One trail is created for each region.

  14. Which of the following is not an option for encrypting and
    securing log files created by CloudTrail?

    1. S3 Server-side Encryption (SSE)

    2. S3 KMS-Managed Keys (KMS)

    3. S3 MFA Delete

    4. Customer-Managed Keys

  15. Which of the following is not included as part of an event
    associated with an activity logged by CloudTrail?

    1. Who made the request

    2. The parameters for the action requested

    3. The username of the requestor

    4. The response returned by the requested service

  16. You have turned on SSE-KMS encryption for your CloudTrail
    log files. What additional step do you need to take to process
    those log files in another application?

    1. Set up a decryption pipeline using Lambda.

    2. Turn on Automatic Decryption in AWS CloudTrail.

    3. Upload your KMS key to AWS CloudTrail.


    4. You do not need to take any steps because logs are
      automatically decrypted.

  17. You want to ensure that no changes are made to your security
    groups and network access control lists (NACLs) across your
    account. What services would you use to create an alarm if
    someone tried to use the CLI to modify or delete a security
    group or NACL? (Choose two.)

    1. SNS

    2. AWS Config

    3. AWS CloudTrail

    4. AWS CloudWatch

CHAPTER 6


self-healing. It also automatically handles backups (like all Amazon
RDS instances if so configured) and replicates across a minimum of
three availability zones. Further, it offers specific versions that are
drop-in replacements for MySQL and PostgreSQL.
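The "drop-in replacement" point is worth making concrete: an application written against MySQL typically needs nothing more than a new endpoint to use Aurora MySQL. A minimal sketch with hypothetical endpoints (no connection is actually made):

```python
# Hypothetical endpoints; nothing connects anywhere. The point is that
# the driver, SQL, and DSN format stay the same for Aurora MySQL --
# only the host name changes.
def dsn(host, db="appdb", user="app", port=3306):
    return f"mysql://{user}@{host}:{port}/{db}"

mysql_dsn = dsn("mysql-prod.abc123.us-east-1.rds.amazonaws.com")
aurora_dsn = dsn("aurora-prod.cluster-abc123.us-east-1.rds.amazonaws.com")
```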


Review Questions

You can find the answers in the Appendix.


  1. Which of the following does Amazon RDS make easiest?

    1. Scalability of databases

    2. Elasticity of databases

    3. Automated scalability of data access

    4. Network access to databases

  2. How is Amazon RDS similar to an Auto Scaling group?

    1. Both Amazon RDS and Auto Scaling policies will add
      instances in response to increased demand.

    2. Both Amazon RDS and Auto Scaling policies will fire off
      alerts when traffic thresholds are reached related to usage.

    3. Both Amazon RDS and Auto Scaling policies provide
      elasticity to your applications.

    4. None of these are true.

  3. Which of the following statements is not accurate regarding
    databases created using Amazon RDS?

    1. Database utilization will never hit 100 percent due to
      Amazon RDS managing database instances.

    2. Database instances require sizing by the customer rather
      than being handled automatically by Amazon RDS.

    3. A portion of your Amazon RDS charges are related to the
      size of the database instance you have chosen.


    4. Amazon RDS makes database provisioning significantly
      simpler than manually installing a database on an instance.

  4. When does Amazon RDS patch your managed database
    instances?

    1. Once a month

    2. Every time any new software patch is available

    3. Every time a patch related to security or instance reliability
      is available

    4. Never; you are responsible for instance patching.

  5. How can you restrict access to an Amazon RDS instance?
    (Choose two.)

    1. By using IAM roles to limit resources’ access to the
      database instance

    2. By using NACLs to limit access to the VPC in which the
      database resides

    3. By setting user permissions on the database running on
      the instance

    4. By using a bastion host to limit direct access to the
      database instance

  6. Which of the following are backup methods supported by
    Amazon RDS? (Choose two.)

    1. Automated hourly snapshots

    2. Automated daily snapshots

    3. User-initiated snapshots at any time

    4. User-initiated snapshots in the set maintenance period for
      your database instance

  7. What is the default Amazon RDS backup retention period?

    1. 3 days


    2. 7 days

    3. 10 days

    4. This value is set at instance creation.

  8. Which of the following is not true about read replicas?

    1. Replication occurs asynchronously.

    2. Backups are configured on the replicas by default.

    3. A replica can be promoted to become a primary instance.

    4. A read replica can be created in the same availability zone
      as the primary instance.

  9. Which of the following is not true about a multi-AZ
    configuration?

    1. Replication occurs synchronously.

    2. Backups are configured on the standby instance by default.

    3. A standby instance can be promoted to become a primary
      instance.

    4. A standby instance can be created in the same availability
      zone as the primary instance.

  10. Which of the following is not true about a multi-AZ
    configuration?

    1. Replication occurs asynchronously.

    2. Backups are configured on the standby instance by default.

    3. A standby instance can be promoted to become a primary
      instance.

    4. Replication occurs synchronously.

  11. When you're using a multi-AZ setup, if the primary database
    instance becomes unreachable, which of the following happens
    automatically? (Choose two.)



    1. The DNS CNAME is changed to point at the standby
      instance.

    2. Backups switch from the standby instance to the primary
      instance.

    3. The standby instance is promoted to become the primary
      instance.

    4. The primary instance is restarted.
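The multi-AZ failover sequence this question probes can be sketched as a toy model: clients keep using the stable endpoint name, and RDS repoints the underlying DNS CNAME at the promoted standby. Host and endpoint names here are made up, and this is an illustration of the mechanism, not AWS code.

```python
# Toy sketch of multi-AZ failover. Host and endpoint names are made up.
dns = {"mydb.example-endpoint.rds": "primary-host"}  # stable CNAME
instances = {"primary-host": "available", "standby-host": "standby"}

def fail_over(endpoint):
    instances[dns[endpoint]] = "failed"      # primary becomes unreachable
    dns[endpoint] = "standby-host"           # CNAME repointed automatically
    instances["standby-host"] = "available"  # standby promoted to primary

fail_over("mydb.example-endpoint.rds")
print(dns["mydb.example-endpoint.rds"])  # standby-host
```

Because applications connect by the endpoint name rather than the host behind it, they need no reconfiguration after the cutover.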

  12. Which of the following is the best solution for reducing the
    read workload on a database instance?

    1. Add read replicas to the database.

    2. Add a multi-AZ configuration to the database.

    3. Create a new database in a second region and set up
      replication between the original and the new database.

    4. Create a new database in a second availability zone and set
      up replication between the original and the new database.

  13. Which of the following are allowed options for the deployment
    of a read replica?

    1. The same availability zone as the primary instance

    2. A different region than the primary instance

    3. The same region as the primary instance

    4. All of the above

  14. Which of the following are allowed options for the deployment
    of a secondary instance of a multi-AZ setup?

    1. The same availability zone as the primary instance

    2. A different region than the primary instance

    3. The same region as the primary instance

    4. All of the above


  15. Which of the following would be good uses for read replicas?
    (Choose two.)

    1. Database instances for a website with high volume
      displaying items for sale

    2. Database instances for a data warehouse focused on
      reporting

    3. Database instances for a website with high volume adding
      users to a mailing list

    4. Database instances to ensure that if network connectivity
      is lost, applications will continue to run

  16. What is the largest sized table you can have on Amazon
    Aurora?

    1. 16 TB

    2. 32 TB

    3. 64 TB

    4. 128 TB

  17. For which database engines can Amazon Aurora drop in as a
    direct replacement? (Choose two.)

    1. MariaDB

    2. SQL Server

    3. MySQL

    4. PostgreSQL

  18. Which of the following will AWS automatically handle when
    you use RDS? (Choose two.)

    1. Patching the database server

    2. Optimizing queries received by the RDS instance

    3. Creating backups compliant with long-term retention
      requirements of your organization


    4. Taking point-in-time backups periodically

  19. You are using Amazon RDS instances for your production and
    development environments. Both instances are running on
    db.t3.small instances. Lately, though development has been
    operating with no issues, the production environment is
    showing increased performance degradation, especially when
    writing new data to the production instance. What change
    would you consider to fix this issue?

    1. Set up ElastiCache in front of the production database to
      cache requests.

    2. Upgrade the production database to use a larger instance
      type.

    3. Set up read replicas on the production instance.

    4. Provision additional network bandwidth to the production
      database.

  20. In a failover scenario from one RDS instance to another
    instance, using a multi-AZ setup, which of the following does
    not occur?

    1. All requests are re-routed to the new instance from the
      failed instance.

    2. DNS entries are pointed to the new instance.

    3. The IP address of the active instance can change.

    4. In-progress activity with the failing database completes
      before cutting over to the new instance.

CHAPTER 7




  1. Set Is to <= 40%.

  2. Set For At Least to 2 Consecutive Period(s) of 5
    Minutes.

  3. Assign the name of the alarm: Low-CPU-Utilization.

  4. Click Create Alarm.

  1. For Take The Action, select Remove One Instance When 40 >= CPUUtilization > -Infinity.

  2. Click Next: Configure Notifications.

  3. Click Next: Configure Tags.

  4. Click Review.

  5. Click Create Auto Scaling Group.

  6. Click Close once the wizard completes.


If you had no Amazon EC2 instances created before these
exercises, you will find that you now have one instance that is
being spun up. This is due to the fact that the minimum capacity
was set to 1.

When you clean up after this exercise, remember to delete the
Auto Scaling group first, and then terminate the instance. If you
terminate the instance first, the Auto Scaling group will kick off
another instance because it no longer has the minimum capacity
met.
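The cleanup warning above can be sketched as a toy reconciliation loop (an illustration only, not the EC2 Auto Scaling service itself): terminating an instance while the group still exists simply prompts the group to launch a replacement to restore its minimum capacity.

```python
# A toy reconciliation loop, not the EC2 Auto Scaling service itself.
class AutoScalingGroup:
    def __init__(self, minimum, maximum, desired):
        self.minimum, self.maximum, self.desired = minimum, maximum, desired
        self.instances = ["i-%04d" % n for n in range(desired)]

    def terminate(self, instance_id):
        self.instances.remove(instance_id)

    def reconcile(self):
        # The service keeps launching replacements until the running
        # count is back at the desired/minimum capacity.
        while len(self.instances) < max(self.desired, self.minimum):
            self.instances.append("i-new-%d" % len(self.instances))

asg = AutoScalingGroup(minimum=1, maximum=3, desired=1)
asg.terminate("i-0000")  # terminating the only instance...
asg.reconcile()          # ...just causes the group to launch another
print(asg.instances)     # ['i-new-0']
```

Deleting the Auto Scaling group first removes the reconciliation behavior, which is why the exercise tells you to do that before terminating the instance.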



Review Questions

You can find the answers in the Appendix.


  1. Which of the following can an EC2 Auto Scaling group contain?

    1. On-demand instances

    2. Spot instances

    3. Containers

    4. On-demand and spot instances, but not containers

  1. Which of the following are part of a launch configuration?
    (Choose two.)

    1. AMI ID

    2. EBS volume mapping

    3. NFS mount points

    4. IAM group for connectivity

  2. An Auto Scaling group has a minimum of 2, a maximum of 5,
    and a desired capacity of 3. How many instances are running in
    the group if the network has reached peak capacity in the VPC
    in which the group is running?

    1. 2

    2. 5

    3. 3

    4. There is not enough information to answer this question.

  3. Why might you choose a launch template over a launch
    configuration? (Choose two.)

    1. You want to create the template directly from an existing
      EC2 instance.

    2. You want to create copies of the template that share key
      information but differ in slight ways.

    3. You want to use both on-demand and spot instances in
      your Auto Scaling group.

    4. You want to have a group with multiple launch templates.

  4. You have an Auto Scaling group serving EC2 instances running
    web servers. You have set CloudWatch to monitor network
    traffic and, at a threshold of 80 percent, to scale the group up.



    Which of the parameters of the Auto Scaling group will change
    when your CloudWatch trigger executes?

    1. Minimum

    2. Maximum

    3. Desired Capacity

    4. ScaleBy

  5. Which of the following is not possible to specify as part of a
    launch template?

    1. Security group

    2. Key pair

    3. AMI ID

    4. Target availability zone

  6. Which of the following is not a required parameter for a launch
    template?

    1. Security group

    2. Key pair

    3. AMI ID

    4. None of these are required.

  7. You are responsible for a high-value web application that
    should be “always available.” It is currently supported by an
    Auto Scaling group running with a desired capacity of 50
    instances. Based just on this information, and the need to
    ensure responsiveness of the application, what Auto Scaling
    policy would you likely implement?

    1. Simple scaling

    2. Dynamic scaling using ExactCapacity

    3. Dynamic scaling using ChangeInCapacity

    4. Dynamic scaling using PercentChangeInCapacity


  8. You have an Auto Scaling group of EC2 instances set up and
    serving web content. Web traffic increases and additional
    instances are created through Auto Scaling, but no traffic is
    going to those instances. Which of the following could result in
    this behavior?

    1. The new instances have been launched with a different key
      pair than the existing instances.

    2. The new instances have been launched in a different
      availability zone than the existing instances.

    3. The new instances have been launched with a different
      security group than the existing instances.

    4. The new instances have not yet had time to completely
      start; just wait a little longer.

  9. You have an Auto Scaling group that has been functioning quite
    well until recently. You learn that thousands of new customers
    have been introduced to the hosted application lately, and they
    typically access the application between 4 and 8 p.m. During
    these hours, the application's performance suffers for all users.
    What changes could you make to your Auto Scaling policy to
    restore performance? (Choose two.)

    1. Set up a scheduled scaling policy to increase the desired
      capacity significantly at 4 p.m. and reduce the desired
      capacity back down at 8 p.m.

    2. Set up a dynamic scaling policy with a large value for the
      capacity percentage to increase by.

    3. Investigate using CloudFront to provide caching to the data
      used in the application.

    4. Increase the maximum value for the Auto Scaling group.
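A scheduled scaling policy like the one described in option 1 can be pictured as a per-hour capacity lookup. The hours and capacities below are made-up example values:

```python
# Hypothetical scheduled actions: hour of day -> desired capacity.
# Mirrors the idea of scaling up at 4 p.m. and back down at 8 p.m.
SCHEDULED_ACTIONS = {16: 200, 20: 50}

def desired_capacity_at(hour: int, baseline: int = 50) -> int:
    """Return the capacity in effect at a given hour (0-23), applying the
    most recent scheduled action that has already fired."""
    capacity = baseline
    for action_hour in sorted(SCHEDULED_ACTIONS):
        if hour >= action_hour:
            capacity = SCHEDULED_ACTIONS[action_hour]
    return capacity

print(desired_capacity_at(17))  # during the 4-8 p.m. peak: 200
print(desired_capacity_at(21))  # after the peak: 50
```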

  10. What is the default cooldown period for an EC2 Auto Scaling
    group?

    1. 2 minutes

    2. 5 minutes

    3. 8 minutes

    4. There is no default cooldown period.

  12. Which of the following does a launch template offer but a
    launch configuration does not?

    1. The ability to specify a key pair for new instances

    2. The ability to version a specific launch setup

    3. The ability to specify a security group for new instances

    4. The ability to back up a specific launch setup

  13. Which of the following are common reasons for an Auto Scaling
    group not scaling out fast enough to handle a large increase in
    demand? (Choose two.)

    1. The cooldown period is too long.

    2. The cooldown period is too short.

    3. The step size for scaling out is too small.

    4. The step size for scaling in is too small.

  14. Which of the following are good reasons to use a launch
    template instead of a launch configuration? (Choose two.)

    1. You want to use on-demand instances in your Auto Scaling
      group.

    2. You want to use spot instances in your Auto Scaling group.

    3. You want to use T2 instances in your Auto Scaling group.

    4. You want to use reserved instances in your Auto Scaling
      group.

  15. Which of the following is not a default in an EC2 Auto Scaling
    group?

    1. A cooldown period of 300 seconds

    2. Health checks on running instances

    3. Automatic startup of a new instance when a running
      instance fails

    4. Automatic restarting of instances that fail health checks

  16. When does the first health check on a new instance within an
    Auto Scaling group take place?

    1. As soon as the instance starts

    2. As soon as the cooldown period ends

    3. As soon as the instance enters the InService state

    4. An indeterminate time after the instance starts but before
      the cooldown period ends

  17. You are running an Auto Scaling group with both on-demand
    and spot instances. You are seeing what appear to be random
    shutdowns of instances in the group. You cannot find any failed
    health checks or triggered scaling events, and new instances are
    being started to replace those that shut down. What might be
    causing these shutdowns?

    1. The health check is configured incorrectly.

    2. You have a process on the shutting down instance that is
      locking up the processor on the instance.

    3. The instances are spot instances and the spot price has
      changed beyond tolerance.

    4. The instances are spot instances and Auto Scaling groups
      often recycle spot instances to keep costs low.

  18. When you put an InService instance into Standby, which of the
    following would happen? (Choose two.)

    1. Health checks of the instance stop.

    2. The desired capacity of the Auto Scaling group is decreased
      by 1.

    3. Another instance is launched to replace the Standby
      instance.

    4. The minimum of the Auto Scaling group is decreased by 1.

  19. You have 3 instances in availability zone 1, 2 in availability zone
    2, and 4 in availability zone 3. There are no spot instances being
    used, and no instances are protected. An instance in availability
    zone 1 is closest to the next billing hour, and an instance in
    availability zone 2 is using the oldest launch configuration. On
    a scale-in event, which instance would be terminated first?

    1. The instance in availability zone 1 closest to the next billing
      hour

    2. The instance in availability zone 2 using the oldest launch
      configuration

    3. The instance in availability zone 3 that has the oldest
      launch template, launch configuration, or is closest to the
      next billing hour (in that order of precedence)

    4. There is not enough information to know.
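The default termination order behind this scenario can be sketched roughly as follows. This is a simplified model with hypothetical field names; the real policy also accounts for allocation strategy and instance protection:

```python
from collections import Counter

def pick_instance_to_terminate(instances):
    """Simplified default policy: pick the availability zone with the most
    instances, then the instance with the oldest launch configuration,
    then the instance closest to the next billing hour."""
    az_counts = Counter(i["az"] for i in instances)
    fullest_az = max(az_counts, key=az_counts.get)
    candidates = [i for i in instances if i["az"] == fullest_az]
    return min(candidates, key=lambda i: (-i["config_age"], i["minutes_to_hour"]))

# A fleet like the one in the question: 3 instances in az1, 2 in az2, 4 in az3.
fleet = (
    [{"az": "az1", "config_age": 1, "minutes_to_hour": 5}] * 3
    + [{"az": "az2", "config_age": 9, "minutes_to_hour": 30}] * 2
    + [{"az": "az3", "config_age": 2, "minutes_to_hour": 30}] * 4
)
print(pick_instance_to_terminate(fleet)["az"])  # az3
```

Note that availability-zone balance is evaluated before launch-configuration age or billing hour, which is why the fullest zone is drained first.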

  20. Which of the following termination policies might not always
    be applicable to an Auto Scaling group? (Choose two.)

    1. OldestInstance

    2. OldestLaunchTemplate

    3. ClosestToNextInstanceHour

    4. AllocationStrategy

CHAPTER 8




Figure 8.11 Once you are successfully authenticated on
your Linux/macOS system, you will see the EC2 splash
screen.


ssh -i <keypair_file> ec2-user@<public_ip_address>



Review Questions

You can find the answers in the Appendix.


  1. VPCs that are peered have connection names that look like
    which of the following?

    A. vpc-11112222

    B. pcx-11112222

    C. 11112222-pcx

    D. pcx-vpc1vpc2

  2. Which of the following does a bastion host provide to a private
    VPC?

    1. Access to the resources in the VPC through a host outside
      the VPC

    2. Access to the resources in the VPC through a host inside
      the VPC

    3. Access to the public resources in the VPC through
      assigning each an elastic IP address

    4. Access to the private resources in the VPC through
      assigning each an elastic IP address

  3. Which of the following are good security practices for a bastion
    host? (Choose two.)

    1. Set up Multi-Factor Authentication on the bastion host.

    2. Use a security group that limits traffic on port 80 to the
      bastion host.

    3. Use an administrative key pair for access to the bastion
      host.

    4. Whitelist a known set of addresses for access to the bastion
      host.

  4. Does VPC peering have any effect on cost in your
    environments?

    1. Yes, if you have traffic flowing from one VPC to another,
      that traffic will stay on the AWS network and use your
      peering connection.

    2. Yes, if your users are downloading from a peered VPC, no
      egress costs will be incurred.

    3. Yes, if your users are accessing hosts in two VPCs that are
      peered, those two hosts will not incur CPU usage costs.

    4. No, VPC peering has no effect on cost.

  5. Can two VPCs in different regions be peered?

    1. Yes, if the two VPCs are in the same AWS account.

    2. Yes, regardless of what accounts the VPCs are in.

    3. Yes, if the two VPCs are not peered to any other VPCs.

    4. No, VPCs cannot be peered across regions.

  6. Which of the following is a limitation of interregion VPC
    peering? (Choose two.)

    1. IPv6 traffic cannot flow across the connection.

    2. IPv4 traffic cannot flow across the connection.

    3. There is no support for jumbo frames.

    4. Spot instances cannot be in the peered VPCs.

  7. Which of the following are true of bastion hosts? (Choose two.)

    1. They must reside in a private subnet.

    2. They must reside in a public subnet.

    3. They must have an elastic IP address.

    4. They must have a public IP address.

  8. Which of the following is not allowed in two peered VPCs?

    1. The two VPCs cannot have multiple public IP addresses.

    2. The two VPCs cannot have multiple elastic IP addresses.

    3. The two VPCs cannot have overlapping CIDR blocks.

    4. The two VPCs cannot have IPv6 addresses.

  9. Which of the following are best practices for bastion hosts?
    (Choose two.)

    1. Security groups should be used to restrict access to the
      hosts.

    2. The host should have a non-elastic IP address.

    3. The host should be in a VPC that is peered to at least one
      other VPC.

    4. The host should be in an Auto Scaling group for high
      availability.

  10. VPC A is peered with VPC B and VPC C. Can traffic flow across
    these peered connections from VPC B to VPC C?

    1. No. This is not allowed.

    2. Yes, as long as the traffic is IPv4 and not IPv6.

    3. Yes, as long as the traffic is IPv6 and not IPv4.

    4. No, unless the traffic is smaller than 64KB.

  11. VPC A is peered with VPC B and VPC C. Can traffic flow across
    these peered connections from VPC B to a host in VPC A, and
    then in a second transmission from that host in VPC A to VPC
    C?

    1. No. This is not allowed.

    2. Yes, as long as the traffic is IPv4 and not IPv6.

    3. Yes, as long as the traffic is IPv6 and not IPv4.

    4. Yes, this is allowed.

  12. How many VPCs can connect to a shared services VPC?

    1. There is no set limit.

    2. 25

    3. 5

    4. There is a default limit of 125, but this limit can be raised
      on request.

  13. What hardware is required to set up a VPC peering connection
    between two VPCs in the same AWS account?

    1. A customer gateway

    2. An Internet gateway

    3. A virtual private gateway

    4. No hardware is required.

  14. What is the main difference between a bastion host and a NAT
    device?

    1. A bastion host allows traffic into a private VPC whereas a
      NAT device allows traffic out to the Internet.

    2. A bastion host relies on NACLs for security whereas a NAT
      device relies on security groups.

    3. A bastion host should be in a public VPC whereas a NAT
      device should be in a private VPC.

    4. A bastion host should be in a private VPC whereas a NAT
      device should be in a public VPC.

  15. You have just inherited a new network architecture that has a
    private VPC with numerous resources within it and a bastion
    host for administrative access. Which of the following would
    you do first?

    1. Set up MFA on the hosts in the private VPC.

    2. Remove any Internet gateways on the private VPC.

    3. Whitelist any IPs that need to access the bastion host.

    4. Set up logging on all shell activity on the bastion host.

  16. Which of the following protocols would typically be allowed to
    access your bastion host? (Choose two.)

    1. SSH

    2. HTTP

    3. HTTPS

    4. RDP

  17. You have a peering connection between VPC A and VPC B.
    Additionally, VPC B has a hardware VPN connection with your
    internal corporate network. You are trying to communicate
    from VPC A to the internal network but connections are being
    refused. What is the most likely issue?

    1. You need to set up a peering connection between VPC B
      and your internal network using the VPN connection.

    2. You need to ensure that route propagation is turned on in
      VPC B.

    3. You need to ensure that route propagation is turned on in
      VPC A.

    4. This is an example of edge-to-edge routing and is
      disallowed by AWS.

  18. You have just been put in charge of a network configuration
    described as using the hub-and-spoke model. There are five
    total VPCs. How many peering connections would you expect to
    find?

    1. 3

    2. 4

    3. 5

    4. It is impossible to answer this question without more
      information.

  19. VPC A has a logging aggregator within it. VPC B has a web
    server and VPC C has an application server, both of which log
    events. VPC D has software that can visualize log data. How
    would you connect these VPCs?

    1. Peer VPC A to VPC D, and peer both VPC B and C to D. Log
      data within each VPC and visualize it using the software in
      VPC D.

    2. Peer VPCs B, C, and D to VPC A. Have VPCs B and C send
      log data to VPC A, and VPC D connect to VPC A to load and
      visualize data.

    3. Peer VPC D to both VPC B and C, and then peer VPC B and
      C to VPC A as well. Log data to VPC A, and use the existing
      peering connections to deliver that data to VPC D for
      visualization.

    4. Peer VPC B to VPC C, and VPC C to VPC A. Route all logs
      from both B and C into A. Then peer VPC D to VPC A to
      visualize the log data.

  20. What additions to your route tables would you expect to need
    in a situation where you have two VPCs peered?

    1. Destination IPs for IPs within the peered VPC would have
      a target of the VPC peering connection (pcx-11112222, for
      example).

    2. Destination IPs for IPs within the source VPC would have a
      target of the VPC peering connection (pcx-11112222, for
      example).

    3. Destination IPs for IPs within the peered VPC would have
      a target of the CIDR block within the peered VPC
      (10.0.0.0/28, for example).

    4. Destination IPs for IPs within the source VPC would have a
      target of the CIDR block within the peered VPC
      (10.0.0.0/28, for example).
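The route-table mechanics behind peering can be shown with a small helper. The function, CIDR blocks, and peering-connection ID below are hypothetical examples:

```python
def peering_routes(local_cidr: str, peer_cidr: str, pcx_id: str):
    """Return the route each side of a peering connection adds: the
    destination is the other VPC's CIDR block, and the target is the
    peering connection itself."""
    return {
        "requester_route": {"destination": peer_cidr, "target": pcx_id},
        "accepter_route": {"destination": local_cidr, "target": pcx_id},
    }

routes = peering_routes("172.31.0.0/16", "10.0.0.0/28", "pcx-11112222")
print(routes["requester_route"])  # {'destination': '10.0.0.0/28', 'target': 'pcx-11112222'}
```

Each VPC routes the peer's address range toward the pcx- target; traffic for addresses inside the local VPC continues to use the VPC's own local route.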

CHAPTER 9



  1. Under Patching Schedule, choose Schedule In A New
    Maintenance Window.

  2. Choose Use A CRON Schedule Builder, and set the window
    to run every Saturday at 11:00 p.m.

  3. Set the maintenance window duration to 4 hours.

  4. Give the maintenance window a name. In my example, I used
    Prod-EC2-Instances.

  5. For Patching Operation, choose Scan And Install.

  6. Click Configure Patching.


You will be dropped into the Patch Baselines area. From here,
you can configure baselines and approve patches. The patches
that you approve will be installed during the maintenance
window that you have specified.

  1. Click the patch baseline that corresponds to your instance
    types. For this example, I chose
    AWS-AmazonLinuxDefaultPatchBaseline.

  2. Click Actions and then click Modify Patch Groups.

  3. In the Patch Groups box, enter the name of the patching
    configuration that you created earlier. Then click Add.

  4. Click Close.



Review Questions

You can find the answers in the Appendix.


  1. Which of the following does AWS Systems Manager not
    provide?

    1. Patching automation

    2. Software installation

    3. Software configuration

    4. Critical vulnerability notification

  2. Which of the following AMIs will not automatically have AWS
    Systems Manager installed?

    1. A Windows 7 AMI from the Amazon Marketplace

    2. A Windows 2000 AMI from the Amazon Marketplace

    3. A Linux AMI from the Amazon Marketplace

    4. A macOS AMI from the Amazon Marketplace

  3. You have a number of instances based on AMIs with AWS
    Systems Manager agent installed, but none are able to
    communicate to the SSM service. What is likely the source of
    this issue?

    1. You need to create an IAM group and assign that group to
      each instance you want communicating with AWS Systems
      Manager.

    2. You need to create an IAM role and have each instance
      assume that role to communicate with the AWS Systems
      Manager service.

    3. You need to add the AWSSystemsManager policy to each
      instance running an SSM agent.

    4. You need to use a Linux-based AMI on each instance to
      ensure it can communicate with the SSM service.

  4. Which of the following policies is required for an SSM agent on
    an instance to communicate with the AWS Systems Manager
    service?

    1. AmazonEC2RoleforSSM

    2. AmazonEC2RoleforASM

    3. AWSEC2RoleforAWSSM

    4. AWSEC2RoleforSSM

  5. Which of the following has the capability to be a managed
    instance? (Choose two.)

    1. An on-premises server

    2. An EC2 instance running in AWS

    3. A container running in AWS via ECS

    4. A Lambda function triggered by an API gateway, running in
      an AWS VPC

  6. Which of the following are valid ways to filter or organize a
    resource group? (Choose two.)

    1. By resource tag

    2. By AWS account number

    3. By IAM role

    4. By environment

  7. Which of the following is a limitation of a resource group?

    1. They cannot contain resources based on resource tag.

    2. They cannot contain resources based on environment.

    3. They cannot contain resources in different regions.

    4. They cannot query resources based on a specific tag.

  8. Which of the following are supported document types within
    AWS Systems Manager? (Choose two.)

    1. Command document

    2. Role document

    3. Policy document

    4. Resource document

  9. Which of the following are supported notation formats for
    documents in AWS Systems Manager? (Choose two.)

    1. YAML

    2. JSON

    3. CSV

    4. Text
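To make the document formats concrete, here is a minimal command document in JSON. The step name and shell command are illustrative; the schemaVersion and aws:runShellScript action are standard Systems Manager values:

```json
{
  "schemaVersion": "2.2",
  "description": "Illustrative command document",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "exampleStep",
      "inputs": {
        "runCommand": ["echo hello"]
      }
    }
  ]
}
```

The same document can equally be written in YAML; Systems Manager accepts either notation.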

  10. Which of the following document types can be used by State
    Manager?

    1. Policy documents

    2. Automation documents

    3. Command documents

    4. All of the above

  11. With which command does a command document typically
    deal?

    1. The Run command

    2. The Patch command

    3. The Halt command

    4. The Update command

  12. Which of the following encryption options is supported by
    Session Manager?

    1. CMKs

    2. KMS

    3. Customer-provided keys

    4. CMS

  13. Which of the following can State Manager help to enforce?
    (Choose two.)

    1. Messaging

    2. Inventory

    3. Security

    4. Compliance

  14. Which of the following are designed to work with the
    Parameter Store? (Choose two.)

    1. GitHub

    2. AWS CodeBuild

    3. AWS CodePipeline

    4. AWS CodeDeploy

  15. You are responsible for a fleet of EC2 instances and have heard
    that a recently released patch has known issues with Rails,
    which your instances are all running. How would you prevent
    the patch from being deployed to the instances, given that they
    are all running the SSM agent?

    1. Remove the patch from the automation pipeline.

    2. Remove the patch from the patch baseline.

    3. Add the patch as an exclusion to the patch baseline.

    4. Add the patch as an exclusion to the automation pipeline.

  16. Which of the following is possible to do in an AWS Systems
    Manager maintenance window? (Choose two.)

    1. Execute AWS Lambda functions

    2. Update patches

    3. Remove a bad patch

    4. Restart an instance

  17. You have a command document written in JSON for your
    instances running a Windows AMI and communicating with
    the AWS Systems Manager Service. You now have inherited
    several Linux-based instances and want to use the same
    command document. What do you need to do to use this
    document with the Linux instances?

    1. Convert the document from JSON to YAML and reload it.

    2. Copy the document and assign the copy to the Linux-based
      instances.

    3. You cannot use a document written for Windows-based
      instances with Linux-based instances.

    4. Nothing; documents will work across platform operating
      systems.

  18. Your organization has mandated that all code running on your
    macOS EC2 instances must either be part of an approved AMI
    or open source. You have been using the AWS Systems
    Manager agent on your instances. What will you need to do to
    ensure compliance with this new policy?

    1. You will need to remove the agent and reinstall it using the
      Open Source option within the agent's installation script.

    2. You will need to remove the Systems Manager agent and
      find another option.

    3. Nothing; the Systems Manager agent is part of the default
      macOS AMI in AWS.

    4. Nothing; the Systems Manager agent is open source and
      available on GitHub.

  19. You need to ensure that a compliance script is executed on all
    of your managed instances every morning at 1 a.m. How would
    you accomplish this task?

    1. Create a new Execute command and use Systems Manager
      to set it up on your instances.

    2. Create a new Run command and use Systems Manager to
      set it up on your instances.

    3. Create a new compliance policy document and ensure that
      all instances’ agents reference the document.

    4. Create a new action document and ensure that all
      instances’ agents reference the document.

  20. Which of the following are ways to customize the default
    patching procedures used by AWS Systems Manager Patch
    Manager? (Choose two.)

    1. Write a custom Run command to install patches on your
      own schedule.

    2. Write an automation document describing your preferred
      patching levels and schedule.

    3. Write your own AWS Systems Manager command to refine
      the default automation.

    4. Write a policy document for each instance you want
      customized.

CHAPTER 10


Review Questions

You can find the answers in the Appendix.


  1. You are responsible for a team of engineers storing large
    documents in S3, often 10 GB or larger. The team has begun to
    receive the following error: “Your proposed upload exceeds the
    maximum allowed object size.” What change should you make
    to resolve this issue?

    1. Use a different S3 bucket for additional uploads.

    2. Request a change to the maximum upload size by
      contacting AWS support.

    3. Switch to using S3-IA, which supports larger file uploads.

    4. Select the Multipart Upload option for all of the uploaded
      documents and ensure their code uses this API.
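The arithmetic behind this limit can be sketched briefly. The 5 GB single-PUT cap is a documented S3 limit; the 100 MiB part size below is just an example choice:

```python
import math

SINGLE_PUT_LIMIT = 5 * 1024**3   # a single PUT is capped at 5 GB
PART_SIZE = 100 * 1024**2        # example multipart part size: 100 MiB

def parts_needed(object_size: int) -> int:
    """Number of parts a multipart upload would use at PART_SIZE."""
    return math.ceil(object_size / PART_SIZE)

ten_gb = 10 * 1024**3
print(ten_gb > SINGLE_PUT_LIMIT)  # True: a plain PUT of 10 GB fails
print(parts_needed(ten_gb))       # 103
```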

  2. You have an S3 bucket named compliance_docs in the US East 2
    region and created a folder at the root level of the bucket called
    nist/. You've turned on website hosting and asked your content
    team to upload documents to the nist/ folder. At what URL
    will these documents be available through a web browser?

    1. https://compliance_docs.s3-website-us-east-
      2.amazonaws.com/nist

    2. https://s3-website-us-east-
      2.amazonaws.com/compliance_docs/nist

    3. https://s3-us-east-
      2.amazonaws.com/compliance_docs/nist

    4. https://compliance_docs.s3-website.us-east-
      2.amazonaws.com/nist

  3. For which of the following HTTP methods does S3 have
    eventual consistency? (Choose two.)

    1. PUTs of new objects

    2. UPDATEs

    3. DELETEs

    4. PUTs that overwrite existing objects

  4. What is the smallest file size that can be stored on standard
    class S3?

    1. 1 byte

    2. 1 MB

    3. 0 bytes

    4. 1 KB

  5. You've just created a new S3 bucket named rasterImages in the
    US East 2 region. You need the URL of the bucket for some
    programmatic access. What is the correct bucket URL?

    1. https://s3-us-east-2-rasterImages.amazonaws.com/

    2. https://s3-east-2.amazonaws.com/rasterImages

    3. https://s3-us-east-2.amazonaws.com/rasterImages

    4. https://amazonaws.s3-us-east-2.com/rasterImages

  6. You want to store your documentation in S3 and have it easily
    and quickly available. However, you also are concerned that
    some documents are not accessed except in bursts. When a
    document is accessed, it is usually accessed multiple times by
    the same team. Which S3 storage class would be a good option
    here?

    1. S3 Standard

    2. S3 Intelligent-Tiering

    3. S3 One Zone-IA

    4. Glacier

  7. What availability does S3 Standard storage provide?

    1. 99.99 percent

    2. 99.9 percent

    3. 99.5 percent

    4. 99.999999999 percent

  8. What durability does S3 Standard-IA storage provide?

    1. 99.99 percent

    2. 99.9 percent

    3. 99.5 percent

    4. 99.999999999 percent

  9. What availability does S3 One Zone-IA storage provide?

    1. 99.99 percent

    2. 99.9 percent

    3. 99.5 percent

    4. 99.999999999 percent
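The published figures behind these availability and durability questions can be summarized in a small table (values as documented by AWS at the time of writing):

```python
# Availability (percent) and durability (number of nines) by storage class.
# All four classes offer eleven nines of durability; they differ in availability.
STORAGE_CLASSES = {
    "S3 Standard":            {"availability": 99.99, "durability_nines": 11},
    "S3 Standard-IA":         {"availability": 99.9,  "durability_nines": 11},
    "S3 One Zone-IA":         {"availability": 99.5,  "durability_nines": 11},
    "S3 Intelligent-Tiering": {"availability": 99.9,  "durability_nines": 11},
}

print(STORAGE_CLASSES["S3 One Zone-IA"]["availability"])  # 99.5
```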

  10. If a document is stored in S3 Standard-IA, how many
    availability zones is that document replicated across?

    1. 1

    2. 2

    3. 3

    4. At least 3, but may be more

  11. When a new S3 bucket is created, who can access that bucket
    without any additional permission changes?

    1. Only the bucket creator

    2. Any users with the S3AllBucket policy

    3. The bucket creator and all administrative users

    4. The bucket creator and anyone in the same IAM groups or
      roles as the bucket creator

  12. Which of the following are valid ways to limit and control
    access to S3 resources? (Choose two.)

    1. IAM policies

    2. KMS

    3. Access keys

    4. Access control lists

  13. Which of the following are valid ways to encrypt data on S3?
    (Choose two.)

    1. SSE-IAM

    2. SSE-S3

    3. SSE-KMS

    4. Amazon Client Encryption Toolkit

  14. If you need to encrypt resources in S3 but require complete
    control of your keys, which option for encryption would you
    use?

    1. SSE-KMS

    2. Amazon S3 Encryption Client

    3. SSE-S3

    4. SSE-C

  15. Which of the following are actual differences between Amazon
    Glacier and Amazon Glacier Deep Archive? (Choose two.)

    1. Amazon Glacier Deep Archive is less expensive than
      Amazon Glacier.

    2. Amazon Glacier Deep Archive is faster to retrieve files
      from than Amazon Glacier.

    3. Amazon Glacier Deep Archive has fewer access options
      than Amazon Glacier.

    4. Amazon Glacier Deep Archive is more expensive than
      Amazon Glacier.

  16. Which of the following are good reasons to use S3 Intelligent-
    Tiering for an S3 bucket? (Choose two.)

    1. The bucket has data that is accessed only once a month.

    2. The bucket has unknown access patterns.

    3. The bucket has changing access patterns that are difficult
      to learn.

    4. The bucket has access patterns that change once each
      month.

  17. Which of the following statements is true? (Choose two.)

    1. S3 Standard and S3 One Zone-IA have the same durability.

    2. S3 Standard-IA and S3 One Zone-IA have the same
      availability.

    3. S3 Standard and S3 One Zone-IA have the same
      availability.

    4. S3 Standard-IA has greater availability than S3 One Zone-
      IA.

  18. In terms of performance, what does S3 Intelligent-Tiering most
    resemble?

    1. S3 Standard

    2. S3 Standard-IA

    3. S3 One Zone-IA

    4. Amazon Glacier

  19. What is the availability of S3 Intelligent-Tiering?

    1. 99.99 percent

    2. 99.9 percent

    3. 99.5 percent

    4. 99 percent

  20. Your organization has a large amount of compliance data stored
    in Amazon Glacier. For the next few weeks, your team needs to
    access this data frequently, but you do not want to move the
    data out of Glacier and then back in a month later. What should
    you do to speed up access to this data temporarily?

    1. Turn on S3 Lifecycle Management and set up a policy to
      move the data into S3 Standard and then back out again in
      a month.

    2. Select the Expedited option for data retrieval on Amazon
      Glacier.

    3. Select the Bulk option for data retrieval on Amazon
      Glacier.

    4. Set up a Lambda to pull all the data from Glacier and stage
      it on an EBS volume for the month.
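The three Amazon Glacier retrieval options and their documented typical access times can be summarized as follows:

```python
# Typical retrieval times for Amazon Glacier, per AWS documentation.
RETRIEVAL_OPTIONS = {
    "Expedited": "1-5 minutes",
    "Standard": "3-5 hours",
    "Bulk": "5-12 hours",
}

print(RETRIEVAL_OPTIONS["Expedited"])  # 1-5 minutes
```

Expedited retrievals trade higher per-GB cost for the fastest access, while Bulk is the cheapest and slowest option.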

CHAPTER 11



Exercise 11.3

Attach the Encrypted EBS Volume to an Amazon EC2 Instance

Now that we have an encrypted EBS volume, we will attach it to an existing
EC2 instance.

  1. Select Volumes from the EC2 Dashboard.

  2. Select the newly encrypted volume.

  3. Click Actions and then click Attach Volume.

  4. Click in the Instance field and select your EC2 instance from the drop-
    down list.

  5. Click Attach.


The instance ID and the drive mapping will now show up in the volume
attributes under Attachment Information, and the drive will be available for
use with your instance.



Exercise 11.4

Turn On Default EBS Encryption for Your Account

For the last exercise of this chapter, let's look at how to turn on default
encryption for all EBS volumes.

  1. In the EC2 console, click EC2 Dashboard.

  2. Under the Account Attributes section to the right, click Settings.

  3. Select the Always Encrypt New EBS Volumes check box, and then click
    Save Settings.

  4. Click Close.


Remember, this change applies only to the region you are currently in; you will
need to change this setting in each region where you have EBS storage.


Review Questions

You can find the answers in the Appendix.


  1. What does IOPS stand for?

    1. Input operations per second

    2. Input/output operations per second

    3. Input and output per second

    4. Input/output overhead per second

  2. Which EBS volume type has the highest maximum IOPS?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD
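The IOPS behavior of the general-purpose SSD (gp2) type follows a documented rule of 3 IOPS per GiB, with a 100-IOPS floor and a 16,000-IOPS cap; Provisioned IOPS (io1) volumes can be provisioned well beyond that, which is why they have the highest maximum. The function below is an illustrative sketch of the gp2 rule:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline performance: 3 IOPS per GiB, floored at 100 IOPS
    and capped at 16,000 IOPS."""
    return max(100, min(16_000, 3 * size_gib))

print(gp2_baseline_iops(20))    # 100
print(gp2_baseline_iops(1000))  # 3000
```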

  3. Which EBS volume type supports the largest volume size?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. All volume types support the same maximum volume size.

  4. Which EBS volume type is best suited for a system boot volume?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  5. Which EBS volume type is well suited for data warehousing?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  6. Which EBS volume type is well suited for large database workloads?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  7. Which of the following EBS volume types cannot be boot volumes? (Choose
    two.)

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  8. If you create an EBS volume using the console, what type will that
    volume be by default?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  9. You want to create a lot of EBS volumes and are not concerned about
    performance, but you're very concerned about cost. These volumes will need
    to be bootable. What is your best option?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  10. If you create an EBS volume using the console, what type will that
    volume be by default?

    1. General-purpose SSD

    2. Provisioned IOPS SSD

    3. Throughput-optimized HDD

    4. Cold HDD

  11. Which of the following are true about EBS snapshots? (Choose two.)

    1. They are incremental.

    2. They are stored on S3.

    3. They are available through the S3 API.

    4. EBS volumes are unmounted before snapshots are taken.

  12. You create a new snapshot from an encrypted EBS volume. What will the
    result be?

    1. An unencrypted snapshot of the encrypted volume

    2. An encrypted snapshot of the encrypted volume

    3. You will be able to create the snapshot only if you have the encryption
      keys to the original volume.

    4. You cannot create a snapshot from an encrypted volume.

  13. How do you create an encrypted snapshot from an unencrypted snapshot?

    1. Encrypt the unencrypted snapshot using the AWS Client Encryption tool.

    2. Create a snapshot of the original volume and encrypt that snapshot after
      it completes.

    3. Make a copy of the unencrypted snapshot and select the option to encrypt
      the copy.

    4. You cannot encrypt an unencrypted snapshot once it has been taken.

  14. You are consistently finding that EBS snapshots of your volumes do not
    contain all of the data that you are seeing reflected in applications that
    connect to those volumes. What could be the issue?

    1. You should make sure your EBS volumes are unmounted before taking
      snapshots.

    2. You should make sure that you stop any EC2 instances connected to your
      EBS volumes before taking snapshots.

    3. Your application may be caching content and not writing it to the EBS
      volume at the time of the snapshot.

    4. Your application may have written the data to the volume but the
      snapshot captures only data that has been on the volume for 60 seconds
      prior to snapshot.

  15. What type of encryption key is applied to encrypted EBS snapshots?

    1. A unique 128-bit AES key

    2. A unique 256-bit AES key

    3. A unique 512-bit AES key

    4. A shared key is used for encryption, but that key is 256-bit AES.

  16. You want to launch a new instance from an unencrypted snapshot, but you
    want the launched instance to be encrypted. How do you accomplish this?
    (Choose two.)

    1. You can select encryption of the instance during creation, regardless of
      the encryption status of the snapshot.

    2. You need to create the instance unencrypted and then encrypt it using the
      AWS Instance Encryption tool.

    3. You need to encrypt the snapshot and then launch the instance from the
      encrypted snapshot.

    4. You can't. You need to launch an encrypted instance from an encrypted
      snapshot.

  17. What happens to the data on an EBS volume when the instance it is mounted
    to terminates?

    1. If the EBS volume persists, the data on it will also persist.

    2. All data on the EBS volume is deleted when the instance is terminated.

    3. If the EBS volume is a boot drive, all data on it is deleted when the
      instance is terminated.

    4. Data is persisted on the EBS volume if Persist Data is checked when the
      volume is attached to the instance.

  18. What do you need to do to a root EBS volume if you want it to persist beyond
    the life of the EC2 instance booting from it?

    1. Set the Persist Data flag on the volume to Yes.

    2. Set the Live Past Instance flag on the volume to Yes.

    3. Set the Delete Data flag on the volume to No.

    4. Set the Delete on Termination flag on the volume to No.

  19. Which of the following allows you to change the capacity and performance of
    an in-use EBS volume?

    1. Change the volume type using the AWS Console.

    2. Change the volume type using the AWS CLI.

    3. Change the volume type using the AWS API.

    4. All of these

  20. It takes AWS about 3 minutes to snapshot your EBS volumes with
    approximately 2 TB of data on them. How long would you expect it to take to
    snapshot 16 TB of data, given that the data is of the same type, on average?

    1. 24 minutes

    2. 5 minutes

    3. 3 minutes

    4. It is impossible to know given the information in the question.

CHAPTER 12


  1. Which of the following is not an AMI accessibility level?

    1. Public

    2. Private

    3. Protected

    4. Shared

  2. You have created a custom AMI and launched a number of
    instances into the US-West-1 region. Recently, you've been
    instructed to re-create the entire environment in US-East-2 for
    redundancy. What steps are required to use this AMI in
    US-East-2?

    1. None; AMIs are available to all regions as long as the same
      account is used.

    2. Ensure that the AMI is set to the Shared accessibility and it
      will be usable in US-East-2.

    3. Copy the AMI to US-East-2 and then it will be available in
      that region.

    4. AMIs cannot be used in multiple regions. You will need to
      create a new AMI in US-East-2.

  3. Which of the following are valid ways to obtain an AMI?
    (Choose two.)

    1. Create one yourself from an existing EC2 instance.

    2. Obtain one from the Global AMI Marketplace.

    3. Obtain one from the AWS Marketplace.

    4. Obtain one from a third-party vendor's GitHub repository.

  4. Which of the following are storage options to back an AMI?
    (Choose two.)

    1. Instance-backed AMI

    2. Volume-backed AMI

    3. EMS-backed AMI

    4. EBS-backed AMI

  5. Who can grant permissions for the use of a shared AMI?

    1. The AMI owner

    2. Anyone who already has permissions to use the AMI

    3. Anyone with the AmiAdmin policy attached to an IAM user,
      group, or role

    4. Anyone within an IAM group who has the AmiDistributor
      permission

  6. You have created a private AMI in your own AWS account. You
    have a coworker who wants to use the same AMI in their own
    development account. How can you allow this? (Choose two.)

    1. Grant permissions to the coworker to use the AMI.

    2. Convert the AMI to a shared AMI.

    3. Add the coworker to a group with the AmiDistributor
      permission.

    4. Set the permissions on the AMI to include the account that
      the coworker is using.

  7. You are responsible for setting up a new Auto Scaling group
    with a number of instances and want to choose the correct type
    of AMI storage. Which AMI storage is most appropriate for an
    Auto Scaling group, given that you expect the group to be quite
    volatile in terms of scaling in and scaling out?

    1. Instance-backed AMIs

    2. EBS-backed AMIs

    3. Transient-backed AMIs

    4. The storage used by an AMI is unrelated to the use of Auto
      Scaling groups, so any storage would be fine.

  8. Which of the following use cases would not be a good fit for an
    EBS-backed AMI instance?

    1. A database server running SQL Server

    2. Container-based applications

    3. An application that typically runs 24/7 for weeks at a time

    4. An instance dedicated to long-term data storage

  9. Who is billed when an AMI you create and share is launched
    within another user's account?

    1. You are billed, as well as the owner of the account in which
      the AMI is launched.

    2. Only you are billed, as you are the AMI owner.

    3. Only the owner of the account in which the AMI is
      launched is billed.

    4. Anyone who is using the AMI is billed.

  10. What happens when you copy an AMI to a new region? (Choose
    two.)

    1. The source AMI is available for immediate use as is in the
      new region.

    2. An identical but distinct AMI is created in the new region.

    3. A new AMI is created with the same identifier as the
      source AMI.

    4. A new unique identifier is assigned to the new AMI.

  11. How can a deregistered AMI be used to start a new instance?

    1. You can start an instance from a deregistered AMI just as
      you would from a registered AMI.

    2. You have to re-register the AMI and then start the
      instance.

    3. You have to choose the Available For Launch option from
      the AWS Console on the deregistered AMI.

    4. It cannot.

  12. What options do you have for encrypting an EBS-backed AMI?
    (Choose two.)

    1. By using a KMS customer master key

    2. By using an SSE customer-provided master key

    3. By using a customer managed key

    4. You cannot encrypt an EBS-backed AMI.

  13. What action would you use to launch an EC2 instance from an
    AMI?

    1. The LaunchInstances action

    2. The RunInstances action

    3. The RunAMI action

    4. The LaunchAMI action

  14. By default, what encryption state is used when the RunInstances
    action is executed?

    1. The resulting instance is encrypted.

    2. The resulting instance is unencrypted.

    3. The resulting instance maintains the encryption state of
      the AMI's source snapshot.

    4. The resulting instance uses the encryption set as default in
      the AWS console.

  15. How can you ensure that an instance launched from an AMI
    based on an unencrypted snapshot is encrypted at all times?
    (Choose two.)

    1. Set the Encryption By Default setting to True.

    2. Supply an encryption parameter to encrypt when using the
      RunInstances action.

    3. Encrypt the instance after creation.

    4. Use a different AMI.

  16. How can you recognize an Amazon public image, as compared
    to non-Amazon images?

    1. Amazon images have a header of amazon-.

    2. Amazon images have an aliased owner, which will appear as
      amazon in the account field.

    3. Amazon images have names beginning with amazon-.

    4. You cannot reliably determine if an image is from Amazon.

  17. What do you need to do to share an AMI with specific AWS
    accounts?

    1. Make the AMI public.

    2. Add the AWS account IDs to the AMI's permissions.

    3. Add the AWS account owner IAM usernames to the AMI's
      permissions.

    4. Add the AWS IAM permission shared to the AMI's
      permissions.

  18. Within how many accounts can an AMI be used?

    1. 5

    2. 25

    3. 100 by default, but this limit can be raised upon request.

    4. Unlimited

  19. Which of the following are included when you copy a source
    AMI to a new region?

    1. Launch permissions

    2. User-defined tags

    3. Amazon S3 bucket permissions

    4. None of these

  20. You create a new AMI, and then copy it into a new account
    owned by your coworker. Who is the owner of the copied AMI?

    1. You are.

    2. Your coworker is.

    3. You and your coworker have joint ownership of the AMI.

    4. There is not enough information to answer.

CHAPTER 13



In a production environment, you would choose a specific S3
bucket rather than choosing All Resources as you did in step 5.
Once the role is created, you simply add it to the IAM Role
drop-down menu available when you create an EC2 instance or
when you choose to modify the instance.
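As a rough sketch of what that console setup produces, the two JSON policy documents below show a trust policy that lets EC2 assume the role and a permissions policy scoped to a single bucket instead of All Resources. The bucket name and role wiring are illustrative assumptions, not values from the text.

```python
import json

# Trust policy: lets the EC2 service assume the role (the console
# creates this when you choose EC2 as the trusted entity).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy scoped to one specific bucket rather than
# All Resources, as recommended above. The bucket name is a
# hypothetical placeholder.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
    }],
}

# With boto3, these documents would typically be passed to
# iam.create_role(..., AssumeRolePolicyDocument=json.dumps(trust_policy))
# and iam.put_role_policy(..., PolicyDocument=json.dumps(s3_policy)).
print(json.dumps(trust_policy, indent=2))
```

Note the two-statement-resource pattern: `s3:ListBucket` applies to the bucket ARN itself, while object actions need the `/*` ARN.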



Review Questions

You can find the answers in the Appendix.


  1. Which of the following are you responsible for?

    1. Security on the cloud

    2. Security of the cloud

    3. Security in the cloud

    4. Security beyond the cloud


  2. Which of the following is AWS responsible for? (Choose two.)

    1. Networking equipment

    2. Application authentication

    3. Networking port security

    4. Physical servers

  3. Which of the following are you responsible for? (Choose two.)

    1. Encrypting data

    2. Keeping operating systems on EC2 instances up-to-date

    3. Keeping operating systems on RDS instances up-to-date

    4. AWS datacenters

  4. Which of the following is an example of shared responsibility
    between AWS and a user?

    1. Keeping RDS instances up-to-date

    2. Securing access to resources below the hypervisor

    3. Maintaining EC2 instances at a host and a server level

    4. Application authorization

  5. How might encryption of data be considered an example of the
    Shared Responsibility Model?

    1. AWS maintains S3, whereas the user encrypts data that is
      stored on S3.

    2. AWS handles the actual mechanism of encrypting data,
      whereas the user chooses what data that encryption should
      apply to.

    3. AWS provides encryption requirements and the user
      implements those requirements.

    4. None of these are examples of shared responsibility.

  6. Which of the following are user types when considering an
    AWS account? (Choose two.)

    1. Account owner

    2. Root user

    3. IAM user

    4. IAM role

  7. What does the principle of least privilege mean?

    1. Users should have minimal privileges and only be granted
      additional privileges through IAM roles.

    2. Users should only gain privileges through group
      membership.

    3. Users should have the permissions they need to perform their
      duties, but nothing more than that.

    4. Users should have permissions to perform their duties and
      possible future duties, but nothing more than that.

  8. Which of the following are valid types of identifiers for IAM
    users? (Choose two.)

    1. Username

    2. Access key

    3. Secret key

    4. MFA

  9. For what purpose would an IAM user need to use a key pair?

    1. Accessing the AWS web console

    2. Accessing the AWS SDK

    3. Accessing the AWS CLI

    4. Accessing a running EC2 instance

  10. You have come on as an AWS consultant and need to audit
    software running on EC2 instances. There is no
    CloudFormation, so you need to examine each instance
    individually. What credential should you ask of your AWS
    administrator?

    1. An access key

    2. A username and password

    3. A key pair

    4. A secret key

  11. Which of the following is true of an IAM role but not an IAM
    group?

    1. Permissions can be granted through this mechanism.

    2. Users can be assigned multiples of each mechanism.

    3. Permissions assumed through this mechanism are
      temporary.

    4. All of these are true of both roles and groups.

  12. Which of the following would you apply to an EC2 instance that
    needs to communicate with a standard S3 bucket in the same
    region?

    1. An IAM group

    2. An IAM role

    3. An IAM policy

    4. All of these can provide an instance access to S3.

  13. To which of the following can you assign IAM policies?

    1. An IAM role

    2. An IAM user

    3. An IAM group

    4. All of these

  14. Which of the following is a difference between a managed and
    inline policy? (Choose two.)

    1. A managed policy can be attached to multiple users
      whereas an inline policy cannot.

    2. An inline policy can be attached to multiple users whereas
      a managed policy cannot.

    3. AWS recommends using inline policies rather than
      managed policies.

    4. AWS recommends using managed policies rather than
      inline policies.

  15. To what does the version of a policy refer?

    1. The date and time the policy was created

    2. The date and time the policy was updated

    3. An arbitrary identifier assigned by the policy author

    4. The version of the policy language used in the policy

  16. Which of the following are parts of a valid IAM policy? (Choose
    two.)

    1. Effect

    2. Sid

    3. Id

    4. Affect

  17. Which of the following is an acceptable entry for an IAM
    policy's principal? (Choose two.)

    1. Another policy's sid

    2. An IAM user

    3. An AWS account ID

    4. A federated user

  18. Why are access keys a potentially greater security risk than
    passwords? (Choose two.)

    1. They are long-lived compared to user passwords.

    2. They are not governed by password policies.

    3. They provide programmatic access to the AWS SDK or CLI.

    4. They expire every 90 days.

  19. When AWS uses the term access key, to which of the following
    are they referring? (Choose two.)

    1. A username

    2. A key pair

    3. An access key ID

    4. A secret access key

  20. What options does AWS KMS provide for key creation?
    (Choose two.)

    1. AWS KMS can generate keys.

    2. AWS KMS can read keys from another AWS account.

    3. AWS KMS allows you to import your own keys.

    4. AWS KMS can import keys from an existing AWS user.

CHAPTER 14



Figure 14.9 When a resource is noncompliant, it shows up on
the AWS Config dashboard.



Review Questions

You can find the answers in the Appendix.


  1. Which of the following is the most important usage of
    reporting and monitoring?

    1. Security

    2. Compliance

    3. Performance of applications

    4. All of the above

  2. Which AWS tool would you use to collect metrics from a
    running EC2 instance that has multiple EBS volumes attached?

    1. AWS Config

    2. Amazon CloudWatch

    3. AWS CloudTrail

    4. All of the above

  3. You suspect that an application client is nonperformant
    because it is making more calls than normal to a REST-based
    API on your application estate. What AWS tool would you use
    to verify this information and validate any changes you make to
    correct this issue?

    1. AWS Config

    2. Amazon CloudWatch

    3. AWS CloudTrail

    4. AWS NetReporter

  4. You have a number of metrics being collected via Amazon
    CloudWatch on your fleet of EC2 instances. However, you want
    to gather additional metrics on a number of instances that do
    not seem to be performing as well as the majority of running
    instances. How can you gather additional metrics not available
    through Amazon CloudWatch's stock configuration?

    1. Turn on detailed monitoring.

    2. Install the Amazon CloudWatch Logs Agent.

    3. Create a new VPC flow log.

    4. Turn on detailed statistics in Amazon CloudWatch.

  5. How long does AWS CloudTrail retain information on API
    calls?

    1. 60 days

    2. 90 days

    3. 6 months

    4. 1 year

  6. Which of the following activities will not generate a
    management and/or data event in AWS CloudTrail?

    1. An AWS CLI call initiated by a developer

    2. An AWS SDK call initiated by Java code running in another
      cloud provider

    3. An interaction between an EC2 instance and RDS

    4. A login to the AWS web console

  7. Which of the following statements about an AWS CloudTrail
    trail with regard to regions is true? (Choose two.)

    1. A trail applies to all your AWS regions by default.

    2. A trail collects both management and data events.

    3. A trail can apply only to a single region.

    4. A trail applies to a single region by default.

  8. Which of the following is not an example of a management
    event?

    1. An AttachRolePolicy IAM operation

    2. An AWS CloudTrail CreateTrail API operation

    3. Activity on an S3 bucket via a PutObject event

    4. A CreateSubnet API operation for an EC2 instance

  9. How are management events different from data events?
    (Choose two.)

    1. Data events are typically much higher volume than
      management events.

    2. Data events are typically lower volume than management
      events.

    3. Data events are disabled by default when creating a trail,
      whereas management events are enabled by default.

    4. Management events include Lambda execution activity
      whereas data events do not.

  10. Which of the following options for a trail would capture events
    related to actions such as RunInstances or TerminateInstances?
    (Choose two.)

    1. All

    2. Read-Only

    3. Write-Only

    4. None

  11. Which of the following will not incur a charge for usage?

    1. The first copy of a management event

    2. The first copy of a data event

    3. The second copy of a data event

    4. The second copy of a management event

  12. How many different performance metrics can an Amazon
    CloudWatch alarm monitor?

    1. One

    2. Two

    3. One or more

    4. Amazon CloudWatch alarms do not monitor performance
      metrics.

  13. Which of the following is not a valid Amazon CloudWatch
    alarm state?

    1. OK

    2. INSUFFICIENT_DATA

    3. ALARM

    4. INVALID_DATA

  14. You have a CloudWatch alarm with a period of 2 minutes. The
    evaluation period is set to 10 minutes, and Datapoints To Alarm
    is set to 3. How many metrics would need to be outside the
    defined threshold for the alarm to move into an ALARM state?
    (Choose two.)

    1. Three out-of-threshold metrics out of five within 10
      minutes

    2. Three out-of-threshold metrics out of five within 2
      minutes

    3. Two out-of-threshold metrics out of five within 5 minutes

    4. Three out-of-threshold metrics out of eight within 16
      minutes

  15. Which of the following settings are allowed for dealing with
    missing data points within Amazon CloudWatch? (Choose
    two.)

    1. notBreaching

    2. invalid

    3. missing

    4. notValid

  16. Which of the following statements accurately describes a
    CloudWatch log stream?

    1. A collection of logs that share the same retention and
      monitoring settings

    2. A collection of logs that share the same IAM settings

    3. A collection of events from a single source

    4. A collection of events from a single VPC

  17. Which of the following does AWS Config not provide?

    1. Remediation for out-of-compliance events

    2. Definition of states that resources should be in

    3. Notifications when a resource changes its state

    4. Definition of compliance baselines for your system

  18. Which of the following would you use to ensure that your S3
    buckets never allow public access? (Choose two.)

    1. AWS Config

    2. Amazon CloudWatch

    3. AWS Lambda

    4. AWS CloudTrail

  19. Which of the following is not part of an AWS Config
    configuration item (CI)?

    1. An AWS CloudTrail event ID

    2. A mapping of relationships between the resource and other
      AWS resources

    3. The set of IAM policies related to the resource

    4. The version of the configuration item

  20. You want to ensure the minimum amount of time for any
    resource that moves out of compliance. You do not care about
    costs associated with configuration monitoring. What
    evaluation approach should you use for your config rules?

    1. Immediate

    2. Periodic

    3. Tagged

    4. Change-triggered

CHAPTER 15

  1. Which of the following are not types of assessments offered by
    Amazon Inspector? (Choose two.)

    1. Port assessments

    2. Network assessments

    3. VPC assessments

    4. Host assessments

  2. How does Amazon Inspector determine the rules to use in
    assessing your environment?

    1. Through AWS Config configuration items

    2. Through Trusted Advisor template settings

    3. Through Amazon Inspector assessment templates

    4. All of the above

  3. Which of the following require agents to be installed on your
    systems?

    1. Network assessments

    2. Host assessments

    3. Both network and host assessments

    4. Neither network nor host assessments

  4. You are concerned about open ports on your system. A previous
    administrator was known to neglect shutting off unused ports.
    Which AWS rules package might you use to determine if
    unused ports are still open?

    1. The CVE rules package

    2. The CIS Benchmarks package

    3. The Security Best Practices package

    4. The Runtime Behavior Analysis package

  5. Which of the following is not assessed by the Network
    Reachability rules package?

    1. VPC peering

    2. Route tables

    3. Virtual private gateways

    4. All of these are assessed by the Network Reachability rules
      package.

  6. Which of the following are types of activity that Amazon
    GuardDuty looks for? (Choose two.)

    1. Host compromise

    2. Instance compromise

    3. Account compromise

    4. Service compromise

  7. You have been tasked with securing your system against
    attacks. Specifically, your environments have been vulnerable
    in the past to vulnerability scans. Which of the following are
    areas into which you should look to protect from malicious
    vulnerability scans? (Choose two.)

    1. Open ports

    2. Dated operating systems

    3. Passwords that don't meet current password policies

    4. Misconfigured protocols

  8. You have instances running in three different AWS regions.
    You are running Amazon GuardDuty in each region. How many
    collections of security findings will you have?

    1. One; security findings are aggregated into the first region
      in which you set up Amazon GuardDuty.

    2. One; security findings are aggregated into a region of your
      choosing.

    3. Three; security findings are kept in the region to which
      they apply.

    4. Two; security findings are aggregated but kept in two
      regions for redundancy.

  9. You have responsibility for eight different AWS accounts. Each
    account has Amazon GuardDuty enabled. How many accounts
    will have security findings within them?

    1. Only the master account will have findings.

    2. Each account will have its own findings but the findings
      will be aggregated into the master account.

    3. Findings will remain in the account to which they apply.

    4. None of these

  10. Which of the following are analyzed by the Amazon GuardDuty
    service? (Choose two.)

    1. AWS CloudTrail

    2. AWS DNS logs

    3. Amazon CloudWatch

    4. Amazon Inspector

  11. You are responsible for setting up Amazon GuardDuty at a
    global firm with applications and instances running in every
    available AWS region. How can you set up GuardDuty so that
    all your security findings are aggregated into a single account?
    Which of the following tools would you need to use? (Choose
    two.)

    1. Amazon S3

    2. Amazon RDS

    3. Amazon CloudWatch

    4. Amazon Inspector

  12. Which of the following services is not available to be consumed
    and analyzed by Amazon GuardDuty?

    1. AWS CloudTrail

    2. AWS DNS logs

    3. VPC flow logs

    4. AWS EC2 instance logs

  13. You have set up an extensive network within AWS and are
    using Amazon GuardDuty to analyze VPC flow logs and DNS
    logs. You also have a requirement to maintain your VPC flow
    logs for at least 12 months. What do you need to meet this
    requirement?

    1. Nothing; Amazon GuardDuty will maintain those logs for
      two years automatically.

    2. Configure Amazon GuardDuty to maintain the VPC flow
      logs for 12 months rather than the default of 90 days.

    3. Amazon GuardDuty does not maintain logs; you'll need to
      use another AWS logging and monitoring service such as
      CloudWatch.

    4. Turn on log retention in Amazon GuardDuty and set the
      Keep value to 12 months.

  14. You need to delete all previous findings from Amazon
    GuardDuty and ensure the service is no longer running on your
    system. How can you stop GuardDuty and make sure findings
    are deleted?

    1. You cannot delete GuardDuty findings manually.

    2. Suspend the GuardDuty service, which will also delete all
      findings and configurations.

    3. Disable the GuardDuty service, which will also delete all
      findings and configurations.

    4. Tear down all instances and devices in the target region,
      turn off GuardDuty, and rebuild your environment.

  15. Which of the following is a means of viewing findings from
    Amazon GuardDuty? (Choose two.)

    1. Amazon CloudWatch Events

    2. Amazon Inspector

    3. The Amazon GuardDuty console

    4. The Amazon GuardDuty CLI

  16. Which of the following does Amazon GuardDuty threat
    intelligence store?

    1. IP addresses

    2. Subnets

    3. CIDR blocks

    4. None of these

  17. You are a consultant asked to assess a client's AWS network.
    However, you have no access to individual hosts. Which of the
    following can you perform using Amazon Inspector?

    1. Only host assessments

    2. Only network assessments

    3. Both host and network assessments

    4. Nothing; you need access to hosts for running any Amazon
      Inspector assessments.

  18. You want to set up Amazon Inspector to automatically assess
    new instances launched through an Auto Scaling scale-out
    event. Which of the following services would you use to set a
    security assessment to run when that happens?

    1. Amazon CloudWatch Events

    2. Amazon CloudWatch

    3. Amazon Inspector, which provides native access to Auto
      Scaling events

    4. You cannot monitor events using Amazon Inspector.

  19. Which of the following is not a valid security finding level for
    an Amazon Inspector rule?

    1. High

    2. Low

    3. Informational

    4. Notification

  20. Which of the following services gives you access to Amazon
    Inspector assessment metrics?

    1. Amazon CloudWatch

    2. Amazon CloudWatch Events

    3. Amazon CloudTrail

    4. None of these

CHAPTER 16



  1. Click Add Rule.

  2. For Rule Number, type 100.

  3. For Type, select All Traffic.

  4. Click Save.


You've created your first NACL. Remember that NACLs are
applied at the subnet layer, so they are ideal as a first level of
defense, with security groups providing the next layer of defense
at the host level.
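The rule created in the steps above (rule number 100, allowing all traffic) can be sketched as the parameters boto3's `create_network_acl_entry` call would take. The NACL ID below is a hypothetical placeholder, not a real resource.

```python
# Parameters mirroring the console steps: rule number 100, all traffic.
nacl_entry = {
    "NetworkAclId": "acl-0123456789abcdef0",  # hypothetical NACL ID
    "RuleNumber": 100,      # rules are evaluated in ascending order
    "Protocol": "-1",       # -1 selects all protocols ("All Traffic")
    "RuleAction": "allow",
    "Egress": False,        # inbound rule (True would make it outbound)
    "CidrBlock": "0.0.0.0/0",
}

# With boto3 this would be applied via:
# boto3.client("ec2").create_network_acl_entry(**nacl_entry)
print(nacl_entry["RuleNumber"], nacl_entry["RuleAction"])
```

Because NACL rules are evaluated in ascending rule-number order and stop at the first match, a deny rule numbered below 100 would take precedence over this entry.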



Review Questions

You can find the answers in the Appendix.


  1. Which of the following are not available to IPv6 networks?
    (Choose two.)

    1. NAT instances

    2. VPCs

    3. VPC endpoints

    4. NAT gateways

  2. How many IP addresses are available in a CIDR block with a
    /16 mask?

    A. 256

    B. 4,096

    C. 65,536

    D. 1,048,576

  3. You need to support 16 hosts in a new subnet and want to
    assign the very smallest possible CIDR block in which these
    hosts will all reside. What is the size of the CIDR block you'd
    choose?

    A. /30

    B. /29

    C. /28

    D. /27

  4. You have a CIDR block that has a /20 sized pool of IP
    addresses. How many bits are available for the host part of an
    address in this scenario?

    1. 20

    2. 10

    3. 12

    4. 22

  5. You have a new VPC and are launching an EC2 instance into
    the VPC with the intent of serving IPv6 requests. However,
    incoming IPv6 requests are not being handled by the new
    instance. What could be the problem? (Choose two.)

    1. There is no IPv6 CIDR block associated with the VPC.

    2. There is no IPv6 CIDR block associated with the target EC2
      instance.

    3. There is no IPv6 IP address assigned to the VPC.

    4. There is no IPv6 IP address assigned to the target EC2
      instance.

  6. What is the default size of an IPv6 CIDR block?

    A. /32

    B. /48

    C. /56

    D. IPv6 does not use CIDR blocks.

  7. You are responsible for converting an IPv4 subnet with
    multiple instances to use IPv6 addresses. You have a specific
    set of IPv6 addresses you want to use. How do you set up a VPC
    to use these specific IPv6 addresses?

    1. You configure the IPs you want to use during VPC creation.

    2. You can only configure IPv6 addresses using the AWS CLI.

    3. You can't; IPv6 addresses are supplied by the Internet
      registrar at random.

    4. You can't; IPv6 addresses are supplied by AWS from the
      pool of IPv6 addresses owned by Amazon.

  8. You have inherited over 22 VPCs from a previous SysOps
    administrator, many of which are very small—ranging in size
    from /26 to /28. You want to consolidate the VPCs into a single,
    large new VPC and then use multiple subnets. What is the
    largest allowable VPC you can create?

    A. /24

    B. /16

    C. /8

    D. There is no limit to the size of the netmask allowed for
    custom VPCs.

  9. You are newly responsible for a number of operational
    applications. Each application should have a development,
    testing, and production environment, with both public and
    private components such as web servers (public) and database
    servers (private). There are nine applications and you want to
    limit a VPC to having no more than three applications hosted.
    You also don't want to have different environments within the
    same VPC. You want redundancy for every resource, so what is
    the minimum number of subnets you'd need?

    1. 9

    2. 18

    3. 36

    4. 54

  8. Which of these must be present for a subnet to be considered
    public? (Choose two.)

    1. It exists within a VPC that has an Internet gateway
      attached.

    2. It has a CIDR block with public IP addresses.

    3. It has a route to a public subnet.

    4. It has a route to an Internet gateway.

  9. You are administrating a well-configured AWS network that has
    a VPC that uses an egress-only Internet gateway. Why would
    this type of Internet gateway be necessary? (Choose two.)

    1. A public subnet within the VPC must communicate
      outward to the Internet.

    2. A private subnet within the VPC must communicate
      outward to the Internet.

    3. A subnet within the VPC uses only IPv4 addresses.

    4. A subnet within the VPC uses only IPv6 addresses.


  10. Traffic is set up to flow from an instance within a private
    subnet out to the public Internet. Which of the following is a
    possible path that traffic could take?

    1. Instance → Internet

    2. Instance → NAT device → Internet gateway → Internet

    3. Instance → NAT device → virtual private gateway →
      Internet

    4. Instance → Internet gateway → Internet

  11. You have a private subnet with multiple instances within it, and
    you want several of these instances to be able to access the
    Internet. You also want to schedule all outbound access within
    a brief 10-minute window at 2:00 a.m. When that access occurs,
    the instances will download hundreds of gigabytes of patch
    definitions. What device is most appropriate for this scenario?

    1. NAT instance

    2. Internet gateway

    3. Virtual private gateway

    4. NAT gateway

  12. You have an S3 bucket that stores documents and records that
    are required by one of your high-volume applications. You want
    to maximize performance and minimize network latency. What
    might you use to speed access from the applications to the S3
    bucket?

    1. Multipart transfer

    2. Interface endpoint

    3. Gateway endpoint

    4. S3 transfer acceleration

  13. Which of the following services can you not access using an
    interface endpoint?


    1. Amazon CloudFormation

    2. Amazon DynamoDB

    3. Amazon Kinesis

    4. AWS CloudTrail

  14. Which of the following are required components of an AWS
    VPN connection? (Choose two.)

    1. Internet gateway

    2. Direct Connect gateway

    3. Customer gateway

    4. Virtual private gateway

  15. You are securing a subnet that contains a number of private
    instances. You want to ensure that databases are reachable by
    web servers in AWS. However, these web servers also have
    security groups that you want to respect and that have been set
    up by another AWS SysOps administrator. What is the best
    approach to use for securing the database-containing subnet?

    1. Use elastic IPs for the database servers and use those IPs
      in the subnet's NACLs.

    2. Do not provide a default route in the database instances’
      security groups.

    3. Use the security group of the instances containing the web
      servers as the incoming source for allowing traffic to the
      databases.

    4. Ensure port 3306 is open but all other ports are closed for
      AWS traffic.

  16. In what order are rules in a NACL evaluated?

    1. Top to bottom

    2. Bottom to top


    3. From the lowest-numbered rule to the highest-numbered
      rule

    4. From the highest-numbered rule to the lowest-numbered
      rule

  17. You have created a new subnet within the default VPC. You
    then added a new rule, numbered 150, to reject all incoming
    traffic. You are still seeing traffic allowed into the subnet. What
    is the problem with your configuration?

    1. You also need to reject traffic at the security group level.

    2. The default VPC always allows in all traffic and this cannot
      be changed.

    3. You need to remove the Internet gateway on the default
      VPC.

    4. Your NACL rule is higher than rule 100, which by default
      allows in all traffic. You need to move your deny rule to a
      number lower than 100 to take effect before rule 100.
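The ordering behavior behind the last two questions can be sketched in a few lines. This is a simplified stand-in in which every rule is assumed to match the incoming traffic:

```python
# Simplified sketch of NACL rule evaluation: rules are applied from the
# lowest-numbered rule upward, and the first matching rule wins. Here
# every rule is assumed to match, mirroring the scenario in question 17.
rules = {
    100: "ALLOW",  # default NACL rule allowing all inbound traffic
    150: "DENY",   # custom rule added later -- evaluated after rule 100
}

def first_match(rules):
    for number in sorted(rules):  # lowest rule number first
        return rules[number]
    return "DENY"  # implicit deny if no rule matches

print(first_match(rules))  # ALLOW -- the deny at rule 150 is never reached
```

Moving the deny below 100 (say, rule 90) would make it win instead.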

CHAPTER 17




  1. Click Create Record Set.

  2. For Name, type www.

  3. For Value, type the IP address of the second web instance.

  4. For Routing Policy, choose Failover and select Secondary.

  5. Choose No for Associate With Health Check.

  6. Click Create.


You have now created a failover set. Based on our test solution, if
you went to www.sometestorg.com, you could be directed to the WEB1
server. If the WEB1 server goes down (say you stopped it), then
www.sometestorg.com would direct you to the WEB2 server. The only way
to test this is to own the domain name in question; you can purchase a
domain to play with on AWS for a reasonable cost. The domain I use in
these exercises is one that I purchased.
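The secondary record created in the steps above can also be expressed as a change batch for the Route 53 ChangeResourceRecordSets API. This is a sketch with illustrative values for the IP address and set identifier; as in the exercise, no health check is associated with the secondary record:

```json
{
  "Comment": "Secondary failover record for www (illustrative values)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.sometestorg.com",
        "Type": "A",
        "SetIdentifier": "Secondary",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.20" }]
      }
    }
  ]
}
```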



Review Questions

You can find the answers in the Appendix.


  1. What is the reasoning behind the name of Amazon's Route 53
    service?

    1. There are 53 types of record sets allowed by DNS.

    2. Port 53 is the port over which DNS operates.

    3. DNS allows up to 53 entries in a single record set.

    4. Port 153 is the port over which DNS operates.

  2. Which of the following is not a record type supported by Route
    53?

    1. NAPTR

    2. NS

    3. SPF

    4. TEXT

  3. You are setting up a new website for a client and have their
    website loaded into an S3 bucket. They want to ensure that the
    site responds to the company name (wisdompetmedicine) both with
    and without the www part of the address. What types of records
    do you need to create? (Choose two.)

    1. CNAME

    2. A

    3. MX

    4. SRV

  4. You are setting up DNS for an application running on an EC2
    host in your network. The application exposes its API through
    an IPv6 address; what type of recordset will you need to create
    for access to this API?

    1. AAAA

    2. A

    3. ALIAS

    4. MX

  5. You have a Lambda-based serverless application. You have several
    Lambda@Edge functions triggered by a CloudFront distribution and
    need to set up DNS. What type of records would you need to use?

    1. CNAME

    2. A

    3. Alias

    4. AAAA


  6. You have an application running in a VPC with an existing DNS
    record. You have a backup of the application running as a warm
    standby in another VPC in a different region. If traffic stops
    flowing to the primary application, you want traffic to be routed
    to the backup. What type of routing policy should you use?

    1. Simple routing

    2. Failover routing

    3. Latency routing

    4. Multivalue answer

  7. You have a fleet of EC2 instances serving content through
    application load balancers in multiple regions. You want to
    ensure that all available hosts can respond to traffic. Which
    routing policy should you use?

    1. Simple routing

    2. Failover routing

    3. Latency routing

    4. Multivalue answer

  8. You have an application running with copies in three different
    regions: US East 1, US West 1, and AP East 1. You want to
    ensure your application's users always receive a response from
    the copy of the application with the lowest network traffic
    response time. Which routing policy should you use?

    1. Simple routing

    2. Failover routing

    3. Latency routing

    4. Multivalue answer

  9. Which of the following routing policies does not allow you to
    provide multiple hosts for resolution?

    1. Simple routing


    2. Failover routing

    3. Latency routing

    4. All of these policies allow for multiple hosts.

  10. You are responsible for a marketing website running in AWS.
    You have a requirement from the marketing team to provide an
    alternate version of the site intended for A/B testing with the
    current site. However, they only want a small portion of traffic
    sent to the new version of the site as they evaluate the changes
    they've made. Which routing policy should you use?

    1. Multivalue answer

    2. Failover routing

    3. Weighted routing

    4. Geolocation routing

  11. You are examining a weighted routing policy with three
    destination hosts, with values of 10, 80, and 50. You want to
    ensure that the first host receives 20 percent of the site traffic,
    the second host receives 50 percent, and the third host receives
    all remaining traffic. To what should you change these values?

    1. 20, 50, *

    2. 5, 15, 80

    3. 10, 25, 15

    4. 20, 50, 50
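Weighted routing sends each record a share equal to its weight divided by the sum of all weights, not a literal percentage. A quick sketch of that arithmetic for the policy's current values:

```python
# Route 53 weighted routing: traffic share = weight / sum of weights.
weights = [10, 80, 50]
total = sum(weights)
shares = [round(w / total, 3) for w in weights]
print(shares)  # [0.071, 0.571, 0.357] -- not the desired 20/50/30 split
```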

  12. Which of the following is not possible when using AWS Private
    DNS?

    1. Setting up private DNS without using a VPC

    2. Exposing records to other AWS VPCs

    3. Exposing records to other AWS regions

    4. Exposing records to other AWS accounts


  13. Which of the following can you not do using private DNS?
    (Choose two.)

    1. Block particular domains in a VPC.

    2. Configure DNS failover for the privately hosted zone.

    3. Expose a private DNS record selectively to the Internet.

    4. Create health checks for instances that only have private IP
      addresses.

  14. Which of the following must you configure to control how
    traffic is routed from around the world to your applications
    using Amazon Route 53 Traffic Flow? (Choose two.)

    1. Traffic record

    2. Traffic policy

    3. Policy record

    4. Policy route

  15. You have taken over a domain that has a working traffic policy
    and policy record. You now want to point additional DNS names
    at that domain and ensure that the existing traffic flows are
    maintained. How would you accomplish this? (Choose two.)

    1. Create CNAME records for each new DNS name and point
      the CNAME at the domain with an existing traffic policy
      and policy record.

    2. Create A records for each new DNS name and point the A
      at the domain with an existing traffic policy and policy
      record.

    3. Create Alias records for each new DNS name and point the
      Alias at the domain with an existing traffic policy and
      policy record.

    4. Create AAAA records for each new DNS name and point the
      AAAA at the domain with an existing traffic policy and
      policy record.


  16. Which of the following is not a type of health check offered by
    Amazon Route 53?

    1. Endpoint monitoring

    2. Other health check monitoring

    3. CloudTrail monitoring

    4. CloudWatch monitoring

  17. What happens in Amazon Route 53 if an unhealthy response
    comes back from a health check? (Choose two.)

    1. Responses are no longer sent to the failing host.

    2. When the host comes back online, responses are
      automatically sent back to the host.

    3. All responses to the failing host are retried until a response
      is received.

    4. A CloudWatch alarm is automatically triggered and sent
      out via notification.

  18. Which factor is the determinant in deciding where traffic flows
    when Amazon Route 53 has a latency-based routing policy in
    place?

    1. The closest region to the requestor

    2. The region with the lowest latency to the requestor

    3. The region with the most available network resources

    4. The weighting set in the routing policy

  19. Why might you use a geoproximity routing policy rather than a
    geolocation routing policy?

    1. You want to increase the size of traffic in a certain region
      over time.

    2. You want to ensure that all U.S. users are directed to U.S.-
      based hosts.


    3. You want to route users geographically to ensure
      compliance issues are met based on requestor location.

    4. You are concerned about network latency more than
      requestor location.

  20. You are seeing intermittent issues with a website you maintain
    that uses Amazon Route 53, a fleet of EC2 instances, and a
    redundant MySQL database. Even though the hosts are not
    always responding, traffic is being sent to those hosts. What
    could cause traffic to go to these hosts? (Choose two.)

    1. You need to use a failover routing policy to take advantage
      of health checks on hosts.

    2. You need to turn on health checks in Amazon Route 53.

    3. The hosts are failing a health check but not enough times
      in a row to be taken out of service by Amazon Route 53.

    4. The hosts should be put behind an application load
      balancer (ALB).

CHAPTER 18




successfully completed your first CloudFormation deployment.
These sample templates are a fantastic way to learn, and you can
always view the templates you have created by clicking your
stack and selecting the Template tab shown in
Figure 18.1.



Figure 18.1 You can click the Template tab to view the JSON
template for the stack that you created.
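When reading the JSON behind a stack, it helps to know the overall shape. A minimal illustrative template needs only a Resources section; the S3 bucket below is a placeholder resource, not part of the sample stacks:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal illustrative template",
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
```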



Review Questions

You can find the answers in the Appendix.


  1. In what ways can you use AWS services to automate and
    minimize overhead in an IaaS environment? (Choose two.)

    1. Ensure deployments to all environments are identical.

    2. Build instances in an identical way across all
      environments.

    3. Replace manual steps with JavaScript to minimize console
      usage.

    4. Define environments in XML rather than console-based
      manual steps.

  2. Which of the following AWS tools and services is not closely
    associated with CloudFormation?

    1. YAML

    2. AWS API

    3. AWS SDK

    4. JSON

  3. What does the AWSTemplateFormatVersion section of a
    CloudFormation template indicate?

    1. The date that the template was originally written

    2. The date that the template was last processed

    3. The capabilities of the template based on the version
      available at the indicated date

    4. The date that the template was last updated

  4. What is the only required component in a template?

    1. Parameters

    2. Metadata

    3. Resources

    4. Outputs

  5. In what section of a CloudFormation template would you
    indicate values that are either dynamic or should be processed
    at runtime?

    1. Inputs


    2. Parameters

    3. Variables

    4. Metadata

  6. Which of the following statements about resources and their
    names are true with regard to CloudFormation? (Choose two.)

    1. You can assign actual resource names to a resource when
      you create it.

    2. You can assign logical names to a resource when you create
      it.

    3. You can map a logical name to a specified AWS resource
      name in a template.

    4. You cannot assign actual resource names to AWS resources
      via CloudFormation.

  7. You want to create a number of different environments but
    allow for easy separation of those environments via billing.
    How can you best accomplish this using CloudFormation?

    1. Assign each environment different names.

    2. Assign each resource in an environment a set prefix (like
      dev-[resource name]).

    3. Assign tags to each resource in an environment.

    4. You cannot differentiate resources created in a single
      CloudFormation template.

  8. You are running a complicated CloudFormation stack, and
    you're encountering errors that don't occur until most of the
    stack has been executed. You're finding that it takes nearly an
    hour to clean up all the resources created by the stack before
    trying it again. How can you reduce this cleanup time?

    1. Enable Automatic Rollback On Error.


    2. Build a second CloudFormation template to tear down all
      resources that you can then run as needed.

    3. Disable Automatic Rollback On Error.

    4. Enable the CleanupResources option within your template.

  9. You have a stack that creates a number of EC2 instances and
    then initiates scripts on each instance. However, the next steps
    in your stack are failing because they depend on resources that
    those scripts configure, and the stack is executing before the
    scripts complete. How can you overcome this problem?

    1. This is not possible using CloudFormation.

    2. You need to use the WaitCondition resource to block
      further execution until the scripts on the instances
      complete.

    3. You need a separate CloudFormation stack, and you have
      to set your initial stack to call the second stack.

    4. You need a separate CloudFormation stack that you can
      run manually after the scripts on your instances complete.

  10. Which of the following can you not create using
    CloudFormation?

    1. VPC

    2. NACL

    3. Elastic IP

    4. You can create all of these using CloudFormation.

  11. From which of the following can you not execute a
    CloudFormation stack?

    1. AWS CLI

    2. AWS API

    3. AWS SDK


    4. You can execute CloudFormation from all of these.

  12. What is the difference between an instance and a template with
    regard to CloudFormation?

    1. A template specifies what should occur, and an instance is
      a specific run of that template.

    2. An instance specifies what should occur, and a template is
      a specific run of that instance.

    3. An instance is a function that runs your template.

    4. A template is a function that runs your instance.

  13. Which of the following is not allowed as a data type for a
    parameter?

    1. List

    2. Comma-delimited list

    3. Array

    4. Number

  14. You want to accept custom CIDR blocks as inputs to your
    CloudFormation stack. What validation might you use to
    ensure the CIDR block is correctly formatted as an input
    parameter?

    1. AllowedValues

    2. MinLength

    3. ValueMask

    4. AllowedPattern

  15. What does AWS call the set of resources created by a template
    instance?

    1. A stack set

    2. A stack

    3. An instantiation


    4. An instance run

  16. You are building a number of CloudFormation templates to be
    executed by several members of the operations team. However,
    these templates require a number of sensitive passwords that
    you don't want to be shown as the template executes. How can
    you prevent these values from being shown?

    1. Mark the parameter as NoEcho.

    2. Mark the parameter as EchoOff.

    3. Mark the parameter as NoOutput.

    4. Mark the parameter as OutputOff.

  17. You want to ensure the URL to a web application created by a
    CloudFormation stack is captured. What element(s) would be
    used to accomplish this?

    1. A template parameter

    2. An output value

    3. A lookup data table

    4. A set of resources’ configuration values

  18. You want to supply a website URL to a stack that an API call
    will use as part of setting up an EC2 instance. What element(s)
    would be used to accomplish this?

    1. A template parameter

    2. An output value

    3. A lookup data table

    4. A set of resources’ configuration values

  19. You want to create several new EC2 instances using the latest
    AWS-supported version of a SUSE Linux AMI. What element(s)
    would be used to accomplish this?


    1. A template parameter

    2. An output value

    3. A lookup data table

    4. A set of resources’ configuration values

  20. You want a stack to pop up a dialog for entry of a database
    username when that database is being created. What
    element(s) would be used to accomplish this?

    1. A template parameter

    2. An output value

    3. A lookup data table

    4. A set of resources’ configuration values

CHAPTER 19




  1. When the build is finished and you see the dashboard for
    your application, click the URL in the breadcrumb area. The
    URL will end with
    <region-name>.elasticbeanstalk.com.

  2. When you see the Congratulations screen, you have
    deployed your first application!

To tear down the application so that you aren't charged for
anything further, click the Actions button in the application's
dashboard and choose Terminate Environment. You will be
asked to type the name of the environment to confirm. Enter the
name and click Terminate. Once you are on the application
dashboard, and it is gray with the words (Terminated), click the
Actions button again and choose Delete Application. Type the
name of the application and click Delete.



Review Questions

You can find the answers in the Appendix.


  1. Which of the following AWS architectural models does Elastic
    Beanstalk support? (Choose two.)

    1. Single-instance deployment

    2. Multi-instance deployment

    3. Load balancer and Auto Scaling group

    4. Redundant deployment

  2. Which of the following architectural models is targeted at web-
    based production environments?

    1. Single-instance deployment

    2. Multi-instance deployment

    3. Load balancer and Auto Scaling group

    4. Auto Scaling group only


  3. Which of the following architectural models is targeted at
    database instances running in a production environment?

    1. Single-instance deployment

    2. Multi-instance deployment

    3. Load balancer and Auto Scaling group

    4. Auto Scaling group only

  4. Which of the following are required in a platform.yaml file?
    (Choose two.)

    1. A provisioner template

    2. The number of instances to deploy

    3. A name for the custom platform being defined

    4. A version number

  5. What is the purpose of the custom_platform.json file required
    in defining a custom platform? (Choose two.)

    1. It defines the AMI source, name, and region to use.

    2. It defines the number of instances to create.

    3. It defines the variables used by the custom platform.

    4. It defines the supported languages in the custom platform.

  6. Which of the following are supported deployment models in
    Elastic Beanstalk? (Choose two.)

    1. Rolling with additional batches deployment

    2. Rolling with incremental updates deployment

    3. Mutable deployment

    4. Immutable deployment

  7. Why might you choose to use a rolling with additional batches
    deployment? (Choose two.)


    1. You don't want your application to completely stop when
      updates are made.

    2. You want the cheapest possible deployment model.

    3. You must always maintain maximum capacity in terms of
      running instances.

    4. You never want two versions of an application running at
      one time.

  8. You are responsible for a critical production application that
    must always be up and running. Cost is not a concern, and
    ensuring that any new instances are healthy before accepting
    traffic is a requirement. Which deployment model should you
    use?

    1. Rolling with additional batches deployment

    2. All-at-once deployment

    3. Rolling deployment

    4. Immutable deployment

  9. You are running a production environment that serves
    thousands of customers. You have a no-downtime requirement
    for updates but are allowed to perform updates in times of day
    when usage of the application is minimal. Which is the most
    cost-effective approach to deployment in this scenario?

    1. Rolling with additional batches deployment

    2. All-at-once deployment

    3. Rolling deployment

    4. Immutable deployment

  10. Which of the following can you not customize when using
    Elastic Beanstalk?

    1. Load balancer properties

    2. Monitoring policies


    3. AMIs used for instances

    4. You can configure all of these.

  11. Which of the following would be required to set up a
    blue/green deployment? (Choose two.)

    1. Amazon Route 53

    2. Elastic Beanstalk

    3. Multiple application environments

    4. Amazon RDS

  12. How is security in an Elastic Beanstalk environment different
    from security in a manually managed environment?

    1. Elastic Beanstalk manages the security of your
      environment completely.

    2. Elastic Beanstalk automatically protects the root account
      with MFA.

    3. Elastic Beanstalk manages security in the cloud as well as
      security of the cloud.

    4. Security is the same in both environments.

  13. Which of the following are managed policies provided by
    Elastic Beanstalk? (Choose two.)

    1. AWSElasticBeanstalkWriteAccess

    2. AWSElasticBeanstalkReadOnlyAccess

    3. AWSElasticBeanstalkReadWriteAccess

    4. AWSElasticBeanstalkFullAccess

  14. Which of the following is true about a default Elastic Beanstalk
    deployment?

    1. All instances created are private.

    2. A custom private VPC is created.


    3. All database instances are private.

    4. The created application endpoint is publicly available.

  15. How should you manage access to your Elastic Beanstalk
    applications and deployments?

    1. Through the Elastic Beanstalk user management console

    2. Through the Elastic Beanstalk CLI tool

    3. Through the AWS Console using IAM permissions and
      roles

    4. Through the AWS Console using EB permissions and roles

  16. Which of the following are required to access the Elastic
    Beanstalk API? (Choose two.)

    1. A user's access key

    2. A user's Elastic Beanstalk username

    3. A user's AWS username

    4. A user's secret key

  17. Which databases are available for use on Elastic Beanstalk?

    1. MySQL, PostgreSQL, DynamoDB, and SQL Server

    2. Any database available through AWS as long as it supports
      read replicas

    3. Any relational database available through AWS but not any
      of the NoSQL databases

    4. Any database available through AWS

  18. Who is responsible for updates to the underlying Elastic
    Beanstalk environment, such as Java or Tomcat updates?

    1. AWS

    2. You, the user


    3. AWS is responsible for major updates, and you are
      responsible for minor updates.

    4. You are responsible for major updates, and AWS is
      responsible for minor updates.

  19. Why might you use the Clone An Environment option within
    Elastic Beanstalk? (Choose two.)

    1. You want to perform a minor version update on the Java
      version in your environment.

    2. You want to create a new environment to make changes to
      without affecting your existing running environment.

    3. You want to test a major version update on the Java
      version before deploying it to your running environment.

    4. You want to test a set of IAM permissions before rolling
      them out.

  20. Which of the following does Elastic Beanstalk store in S3?
    (Choose two.)

    1. Server log files

    2. Database swap files

    3. Application files

    4. Elastic Beanstalk log files


Appendix

Answers to Review Questions

Chapter 1: Introduction to Systems
Operations on AWS

  1. B. AWS Organizations is the AWS service for managing and
    organizing multiple accounts.

  2. C. CloudTrail provides an API call tracker for services that
    interact within AWS. It provides compliance and tracking but is
    also ideal for simply watching API traffic.

  3. B. CloudWatch is the core AWS monitoring tool. While
    CloudTrail provides API tracking, it's CloudWatch that is ideal
    for full application monitoring.

  4. C. Auto Scaling in AWS is the process by which application
    resources are added to or removed from a group to scale and
    meet demand.

  5. A, C. The obvious answer here is Auto Scaling groups, which
    are the core of AWS’ scalability solutions. In addition to that, an
    elastic load balancer is key to routing traffic to various
    instances. While CloudFront and Lambda can be used in
    scalable applications, neither is really required.

  6. A, C. The key here is understanding the acronyms. EBS is
    Elastic Block Store, and RDS is the Relational Database
    Service, both of which are storage services. EC2 is Elastic
    Compute Cloud, a compute service, and VPC stands for Virtual
    Private Cloud, which is concerned with networking.

  7. B. Option B, Identity and Access Management, is the AWS IAM
    service, and is usually called simply IAM. It is the approach
    AWS provides for user management, as well as handling
    groups, roles, permissions, policies, and the like.

  8. A, D. The shared responsibility model makes this distinction:
    you are responsible for security in the cloud, and AWS is
    responsible for the security of the cloud. This means that AWS
    provides secure resources and infrastructure, and you, the
    customer, provide security of the resources and applications
    you deploy into the cloud.

  9. A. It can seem as if the shared responsibility answer is best
    (option D), but customers actually have no deep access into
    regions or availability zone infrastructure. As a result, it is
    wholly up to AWS to secure these constructs.

  10. A. AWS VPC, the virtual private cloud, is AWS’ basic
    networking building block. A VPC contains subnets and
    instances within those subnets.

  11. B. CloudFormation is AWS’ deployment mechanism. Written
    in JSON, CloudFormation provides templates that can be used
    to create standardized application templates for deployment.

  12. A. AWS offers four support plans: Basic, Developer, Business,
    and Enterprise. There is no such thing as a Free plan, although
    there is a free
    tier of AWS access.

  13. A. A network access control list (ACL) behaves somewhat like a
    firewall in an on-premises architecture. Network
    ACLs (NACLs) aren't replacements for firewalls, because there
    is not an exact 1-to-1 mapping between cloud components and
    on-premises ones. However, NACLs explicitly determine the
    types and ports of traffic that are allowed into and out of
    Amazon VPCs, and so are similar in nature to a firewall.

  14. B. There are two primary resources for real-time
    administration interaction with AWS: the console and the CLI
    (command-line interface). The console is web-based, so the CLI
    is the better answer here.

  15. A, C. The key to this question is the term “network
    environment.” While you would likely use EC2 instances and
    databases via RDS in hosting a web application, the question
    asks specifically how to create the actual hosting environment
    itself. For that task, you'll need a Virtual Private Cloud (VPC) to
    construct the actual networking space (including subnets) and

    CloudFormation for repeatable deployments of that
    infrastructure.

  16. D. The AWS Service Level Agreement (SLA) defines how AWS
    responds to outages and service degradations, and it includes
    specifics for every service in terms of response and uptime.

  17. B. The AWS Shared Responsibility Model lays out the roles and
    responsibilities of both users and AWS itself in relation to the
    cloud environment.

  18. B. A region is a separate geographic area in which AWS has
    availability zones and in which services run.

  19. D. AWS regions do not have a set number of availability zones.
    In fact, regions often have AZs added or removed based on
    usage.

  20. C. An AWS region is a geographic area within AWS. Within
    each region there are availability zones, which function as
    virtual datacenters.

Chapter 2: Amazon CloudWatch

  1. C. By default, CloudWatch collects metrics every 5 minutes,
    although you can modify this frequency to as little as 1 minute.

  2. D. CloudWatch provides a number of metrics, and their names
    aren't always easy to recall. Here, it's
    VolumeThroughputPercentage that you want.

  3. B. Step one here is to recognize that most CloudWatch metrics
    report in seconds rather than minutes. This means you can
    eliminate options A and C. Of options B and D, B is correct:
    VolumeIdleTime reports on how long the volume was idle with
    no I/O occurring.

  4. C. CloudWatch's most basic—and often most useful—metric on
    compute is CPUUtilization, which reports as a percentage of
    how much of the instance's CPU is currently in use.

  5. C. A resource group is primarily used to group resources that
    need to be viewed, monitored, and acted on as a single unit,
    ideally on a single dashboard (option C). Resource groups have
    nothing to do with multi-region or multi-AZ setups (and
    CloudWatch is not limited by either), and they do not have
    anything to do with default versus nondefault metrics (option
    D).

  6. B. You do not have to stop or terminate a running instance to
    enable detailed monitoring. You simply select the instance in
    the AWS management console and select Enable Detailed
    Monitoring (under the Actions ➢ CloudWatch Monitoring
    menu).

  7. A. Resource groups are organized based on user-defined tags
    attached to resources.

  8. A. Memory is not provided as a standard CloudWatch metric,
    and you'd need to create a custom metric for reporting on it.

  9. B. CloudWatch can check status as often as 1 minute, either in
    a custom metric or in detailed monitoring, if that metric is
    standard. A high-resolution metric can be created as a custom
    metric and check more frequently.

  10. B, D. CloudWatch offers two monitoring levels: Basic and
    Detailed.

  11. A, D. CloudWatch does not provide memory reporting by
    default, and throughput is not a metric reported on for EC2,
    which is compute-related rather than networking-related.

  12. B. This is pretty esoteric, but it is unfortunately the type of
    thing AWS might ask on an exam. Auto Scaling groups created
    via the console use basic monitoring, whereas groups created
    via the CLI use detailed monitoring by default. Strange, but
    true.

  13. C. CloudWatch has limited ability to report on memory usage,
    which is why memory isn't a default CloudWatch metric.
    Option C, responding to thread count, isn't something that
    CloudWatch can monitor—and is related to memory again.
    You'd need a third-party tool for that sort of metric.

  14. C. You can eliminate options A and B immediately, as
    CloudWatch cannot collect metrics more often than once a
    minute. This leaves options C and D. Option D is out, as it
    would certainly affect the system's overall performance;
    turning off processes is typically viewed as disruptive. This
    leaves option C: adding a metric and seeing if the traffic out of
    the suspect EC2 instance correlates to the traffic into the
    DynamoDB tables.

  15. B. When detailed monitoring is enabled, CloudWatch will
    update every minute. This is the most frequent option; the
    default option is 5-minute increments.

  16. C. The key here is that the metric in question is high
    resolution. High-resolution metrics are custom and are not
    constrained by the rules of standard CloudWatch metrics. They
    can publish as often as every second (although not more
    frequently).
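
A minimal sketch of publishing such a high-resolution custom metric: the metric and namespace names below are hypothetical, and the actual publish call (shown commented out) would use boto3's CloudWatch client. The `StorageResolution` field of `1` is what marks the datum as high resolution; the default of `60` makes it a standard-resolution metric.

```python
# Build a metric datum for a hypothetical high-resolution custom metric.
# StorageResolution=1 allows publishing as often as every second;
# StorageResolution=60 (the default) is a standard-resolution metric.
def build_metric_datum(name, value, unit="Count", high_resolution=True):
    return {
        "MetricName": name,
        "Value": value,
        "Unit": unit,
        "StorageResolution": 1 if high_resolution else 60,
    }

datum = build_metric_datum("QueueDepth", 42.0)

# To actually publish (requires AWS credentials and boto3):
# import boto3
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="MyApp", MetricData=[datum])
```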

  17. D. CloudWatch Events are triggered by changes in a resource's
    state (like an EC2 instance starting up, option B), logins to the
    console or access of the AWS API (option C), scheduled triggers
    (option A), or code-based triggers. This leaves option D; API
    calls to programmatic APIs within your code are best monitored
    by CloudTrail and are not going to generate CloudWatch
    Events.

  18. B. AWS uses (not surprisingly) the AWS prefix to their
    namespaces: AWS/DynamoDB, AWS/S3, and so forth.

  19. D. CloudWatch defines alarms in terms of predefined
    thresholds that are absolute, rather than relative to existing
    conditions. In other words, while you can monitor a metric
    hitting a specific high or low value (such as latency over 10 ms
    or output at 0 bytes), you cannot define a metric that measures
    usage relative to that same metric at an earlier point in time—
    and that's exactly what option D describes. You'd need to write
    custom code to read a metric and compare it with stored values
    from that same metric at earlier points in time to accomplish
    option D. That makes it the answer that would require custom
    programming.
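
The custom comparison described above might be sketched as follows, assuming you have already fetched the current metric value and stored an earlier sample yourself; the 50 percent threshold is an arbitrary example value.

```python
# Compare a metric's current value to a stored sample from an earlier
# point in time -- something CloudWatch alarms, which use absolute
# thresholds, cannot do on their own.
def usage_spiked(current, earlier, threshold_pct=50.0):
    """Return True if current exceeds earlier by more than threshold_pct."""
    if earlier == 0:
        return current > 0
    change_pct = (current - earlier) / earlier * 100.0
    return change_pct > threshold_pct

# Yesterday's sample was 200 and the current reading is 350: a 75%
# jump, so this would trigger the custom notification logic.
spiked = usage_spiked(350, 200)
```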

  20. A. A rule indicates how an event should be routed. It
    potentially matches an event and, in the case of a match, sends
    that event on to a target.

Chapter 3: AWS Organizations

  1. B, C. AWS Organizations provides management of multiple
    accounts in one place (option C). It also typically aggregates
    account costs, and the higher summative costs are eligible for
    AWS volume discounts (option B).

  2. B, C. IAM provides users, groups, roles, and permissions. AWS
    Organizations provides organizational units and service control
    policies, as well as consolidated billing features. The
    components of AWS Organizations are not part of IAM (and
    vice versa).

  3. B. AWS Organizations groups accounts into organizational
    units (OUs), allowing for groupings of permissions and roles.

  4. A. An SCP in AWS Organizations is a service control policy and
    can be applied to an organizational unit (OU) to affect all users
    within that OU. It effectively applies permissions at an
    organizational level, in much the same way that a group applies
    them at a user level.
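
For illustration, an SCP is simply a JSON policy document; this hypothetical example denies all EC2 actions for every account in the OU to which it is attached:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllEC2",
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```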

  5. C. Service control policies (SCPs) are applied to OUs
    (organizational units) in AWS Organizations.

  6. A. Service control policies (SCPs) are permission documents
    that can be applied to accounts and organizational units.

  7. B, C. Organizational units and accounts are AWS Organizations
    constructs to which SCPs can be applied. Users and groups are
    IAM constructs to which policies can be applied.

  8. B. AWS Organizations does not provide for batched or
    automated account creation, although it does make creating
    multiple accounts with similar organization and structure
    simple.

  9. A. IAM should be used for access management, especially
    when dealing with a single account, as described in this
    question.

  10. C. While this is a permissions question, and therefore related
    to IAM, whenever you have what amounts to a companywide
    (or organizationwide) policy, AWS Organizations is likely the
    best approach. Here, a service control policy could be applied
    across all accounts restricting access to SSH.

  11. D. The most significant issue with tagging resources and using
    those tags to manage billing is that a number of AWS services
    are difficult to tag, as they are system-level services that are not
    exposed in the same manner as resources like EC2 instances,
    containers, and managed services. Additionally, some services
    are not readily identifiable, creating confusion. AWS
    Organizations addresses all of these problems.

  12. C. You do not receive any discounts on standard AWS fees,
    including those associated with moving data across regions
    (option C). However, you certainly could receive discounts on
    those fees based on volume achieved by combining all account
    usage, rather than treating each account separately (to which
    option D alludes).

  13. B. In an AWS Organizations multi-account setup, all reserved
    instances will use the lowest hourly price from any account in
    the organization. This means that all accounts effectively
    benefit from any member account's lowest rate. This is a lesser
    known advantage of AWS Organizations but can have
    significant cost impact if a lot of reserved instances are being
    used.

  14. A, B. This is a pretty classic use case for AWS Organizations.
    You could use organizational units to organize accounts and
    service control policies to standardize resource permissions and
    access. Consolidated billing is a feature that would provide
    value here, but it isn't something you set up as much as it is
    something that you'd take advantage of. Resource tagging
    would not apply, because you'd be using AWS Organizations for
    billing management.

  15. C, D. Consolidated billing and resource tagging are both
    features that would be useful for centralizing the billing of
    multiple accounts. Organizational units and service control
    policies are useful for management from a system
    administration point of view, but not so much from a billing
    point of view.

  16. B. Every organization in AWS Organizations should have a
    single master account. All other accounts are controlled and
    organized by this account.

  17. A. Using organizations is ultimately about multi-account
    management, and every organization should have a master
    account and one or more member accounts. While you could
    potentially create an organization with just a single master
    account, it wouldn't make much sense and would also go
    against AWS best practice.

  18. A. This is a case where the answer might be a bit unintuitive
    (and unfortunate). A single account can only belong to a single
    organizational unit. This means that you can't have an account
    in both a production and an EastCoast OU, for example.

  19. B. You can nest OUs in AWS Organizations, but that nesting
    functions similarly to account membership in an OU. A single
    OU can belong to one other OU at a time, but no more than
    one.

  20. A, D. AWS Organizations has replaced consolidated billing as
    the preferred option for managing multiple accounts together.
    To manage your accounts through one bill, you need to set up
    AWS Organizations (option A), which will require you to
    choose or create a master account for your organization (option
    D).

Chapter 4: AWS Config

  1. C, D. AWS Config provides both continuous monitoring and
    continuous assessment. Continuous deployment and
    continuous integration are part of the AWS developer toolset.

  2. C. The best way to notify people in an organization about
    configuration changes is to connect AWS Config directly to
    SNS, the Simple Notification Service. This service can then send
    out texts and other notification types to interested parties.
    While CloudWatch can receive messages and then send them
    out, it is a less direct and simple solution than SNS. CloudTrail
    is for auditing and API logging, and SQS is a queue service.

  3. B. AWS Config normalizes configurations and stores them in
    Amazon Simple Storage Service (S3). You'll need to be careful
    here, as DynamoDB is a useful service for configuration
    information; it stores key:value pairs. However, AWS Config
    uses S3 for this purpose.

  4. B, D. This is a tough question and needs to be read carefully. A
    configuration item contains basic information about a resource,
    configuration data for the resource (option C), a map of related
    resources (option A), AWS CloudTrail event IDs (not
    CloudWatch IDs, from option B), and metadata about the
    configuration item itself (not about connected resources). So
    both B and D are the correct selections, as neither are part of a
    configuration item.

  5. B, C. This is another difficult question. The keys when
    deciphering configuration items are configuration of the
    resource, intrinsic or identifying information about the
    resource, and information about the configuration item. In this
    case, that translates to the instance type of the EC2 instance
    (which is intrinsic to the resource) and the time that the
    configuration item was captured (metadata about the
    configuration item itself). While the user who created the
    instance and the time that the instance has been running are
    important, they are not specific to the configuration of the
    instance, and they do not uniquely identify the instance.
    Therefore, they're not part of a configuration item. (Note that
    the time a resource was created is reported, so you could
    calculate the running time of the instance, but that value is not
    directly reported.)

  6. B. Code to evaluate a custom rule should be put into a Lambda
    function. That function can then be associated with the rule in
    AWS Config.
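
As a sketch, such a custom rule's Lambda function might look like the following. The compliance logic here (flagging any EC2 instance that is not a t3.micro) is a hypothetical example, and the reporting call shown commented out would use boto3's AWS Config client.

```python
import json

# Hypothetical compliance check: only t3.micro instances are compliant.
def evaluate(configuration_item):
    instance_type = configuration_item["configuration"]["instanceType"]
    return "COMPLIANT" if instance_type == "t3.micro" else "NON_COMPLIANT"

def lambda_handler(event, context):
    # AWS Config passes the configuration item inside invokingEvent.
    item = json.loads(event["invokingEvent"])["configurationItem"]
    compliance = evaluate(item)
    # Report the result back to AWS Config (requires AWS credentials):
    # import boto3
    # boto3.client("config").put_evaluations(
    #     Evaluations=[{
    #         "ComplianceResourceType": item["resourceType"],
    #         "ComplianceResourceId": item["resourceId"],
    #         "ComplianceType": compliance,
    #         "OrderingTimestamp": item["configurationItemCaptureTime"],
    #     }],
    #     ResultToken=event["resultToken"],
    # )
    return compliance
```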

  7. A, C. Rules can be triggered in two ways: by a configuration
    change or through a periodic frequency, which you set. In both
    cases, rules are evaluated when triggered.

  8. A, C. AWS Config provides relevant information to changes
    made to resources. In this case, that would include a record of
    who made the change (A) as well as the source IP address from
    which that change was requested (C). API calls (B) are the
    domain of AWS CloudTrail, and AWS console logins would be
    reflected in logs, not AWS Config.

  9. D. AWS Config doesn't affect how users actually use AWS,
    including the changes they make to configurations. It can only
    evaluate configurations after those changes are made. You'd
    need to use IAM permissions and roles and the AWS Service
    Catalog to prevent changes from happening at all.

  10. C. AWS Config is enabled on a per-region basis. However, it
    can be enabled and then disabled and then re-enabled again.
    Therefore, option C is correct.

  11. A. Continuous integration relates to automated testing of new
    code as it's pushed into a version repository, which in this case
    is option A. In this set of answers, you're looking for references
    to actual code and then the testing of that code. The other
    options deal with deployment or configuration and are
    therefore not correct.

  12. D. AWS lets you create up to 150 rules per account. You can
    request that limit be raised if needed.

  13. A, B. A rule in AWS requires several pieces of information: an
    indication of whether the rule is change-based or periodic
    (option A) and a resource ID or type (option B). You can specify
    a tag key to match (option C), but it is not required, and you do
    not configure notifications on rules in the rule itself (option D).

  14. B, C. A periodic rule can be triggered every 1, 3, 6, 12, or 24
    hours. Lesser and greater frequencies are disallowed.

  15. D. AWS Config is itself an AWS resource that provides APIs.
    This means that you can use AWS CloudTrail to view logs of
    those API calls, including calls to create new rules.

  16. C. AWS Config returns a single evaluation for a resource, and
    that resource is compliant only if it is compliant for all rules
    that apply to the resource. In this case, since not all rules are
    compliant, the evaluation would return Noncompliant (option
    C).
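
The aggregation described above amounts to an all-or-nothing check over the per-rule results; the rule names below are hypothetical examples.

```python
# A resource is Compliant only if every rule that applies to it
# evaluates as compliant; a single failing rule makes the resource
# Noncompliant as a whole.
def overall_compliance(rule_results):
    return "COMPLIANT" if all(rule_results.values()) else "NON_COMPLIANT"

results = {
    "required-tags": True,
    "encrypted-volumes": True,
    "approved-amis": False,  # one failing rule...
}
# ...makes the whole resource evaluate as NON_COMPLIANT.
status = overall_compliance(results)
```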

  17. C. AWS Config is primarily concerned with providing point-in-
    time information about resources (option A) and to provide a
    baseline configuration that is considered acceptable (options B
    and D). CloudTrail would be used to determine the caller to a
    resource API.

  18. B, C. AWS Config allows you to work with configurations
    across accounts and regions using multi-account multi-region
    data aggregation (option C). While use of AWS Organizations is
    not required, it is recommended by AWS as a means to provide
    central account receipt of configuration reporting (option B).

  19. A, B. Three of these options are valid: you will need an S3
    bucket for storing aggregated information (option A), IAM
    policies to allow writing to that bucket (option B), and you can
    use an SNS topic to send out notifications (option D). However,
    the first two are required whereas setting up an SNS topic is
    optional, making the correct options A and B. There is no such
    service as AWS Log Aggregator (option C).

  20. C. AWS Config is itself an AWS resource that provides APIs.
    This means that you can use AWS CloudTrail to view logs of
    those API calls, including calls to create new rules.

Chapter 5: AWS CloudTrail

  1. A. CloudWatch is the choice for performance metrics.
    Performance is not the same as an API log. While API logs via
    CloudTrail might help in troubleshooting performance, they are
    not themselves measures of performance.

  2. B. Auditing is a key word for both CloudTrail and AWS Config.
    For API usage, though, CloudTrail is the correct choice.

  3. C. Configuration should pretty clearly point you to AWS
    Config, and that's absolutely the correct answer here.

  4. B. This is getting a bit meta, but CloudTrail is ideal for logging
    access to a service—and in this case, the AWS Config service.
    Remember that audit and log trails apply to all AWS services,
    including the monitoring services themselves.

  5. D. The key here is to understand that the default setting for a
    CloudTrail trail is to function in all regions. Therefore, any new
    Lambda functions in new regions will automatically be picked
    up. You don't need to perform any additional configuration.

  6. B. AWS allows five trails per region before you need to raise
    any predefined limits.

  7. D. You can write logs from AWS CloudTrail to any S3 bucket in
    any region, regardless of where other logs are being written or
    if the trail writing the logs is in a different region.

  8. A. EU West 2 already has the maximum number of allowed
    trails: three cross-region trails and two region-specific trails,
    adding up to five, the predefined limit.

  9. D. The problem here is EU West 1. That region has three cross-
    region trails, and an additional two region-specific trails, for a
    total of five. You will not be able to add any more trails—cross-
    region or specific to EU West 1—until one of the existing trails
    is removed.

  10. B. This isn't difficult but can trip you up—especially if you're
    already thinking about AWS CloudTrail. While AWS CloudTrail
    does log events related to API access, it does not send out
    notifications or alarms. That is the province of SNS.

  11. A, D. CloudTrail is the obvious portion of the answer, as it logs
    API access. But you'll want to use something like SNS to
    actually send out notifications. SWF is for workflow and not
    appropriate here. CloudWatch does provide monitoring and
    alarms but is geared at resource usage, not API access.

  12. A, B. CloudTrail provides API logging and can be used for
    monitoring, and CloudWatch monitors the underlying AWS
    resources. Both can be used to detect anomalies or unusual
    access patterns. SWF is a workflow tool, and Trusted Advisor
    makes recommendations but does not provide real-time
    monitoring.

  13. B. CloudTrail is the AWS service for logging and is particularly
    helpful for auditing and compliance.

  14. C. CloudTrail is on by default in AWS accounts. You can simply
    log in and begin viewing up to 90 days of account activity
    without any other setup (option C).

  15. D. AWS CloudTrail supports all of these services and, in fact,
    almost all available AWS services.

  16. D. When a trail is applied to all regions, a new trail is created in
    each region (option D), and all deliver activity to a single S3
    bucket. No additional trails are needed.

  17. D. By default, log files generated by CloudTrail are encrypted
    using S3 SSE (option A). You can also optionally turn on S3
    MFA Delete to further protect files in S3 (option C) and use
    SSE-KMS for CloudTrail log files (option B). Using
    customer-managed keys is not an option for CloudTrail logs,
    making option D the correct answer.

  18. C. The events logged by CloudTrail include who made the
    request (option A), the services used, the actions performed,
    the parameters for the action (option B), and the response
    returned by the service (option D). This leaves option C as not
    being reported: the username of the requestor.

  19. D. Logs are automatically decrypted by Amazon S3 and do not
    need any special work to be decrypted.

  20. C, D. All of these services could likely be used in some way to
    facilitate this monitoring. However, the question specifically
    asks about alarms and the CLI, which is an API client for AWS.
    Therefore, the API calls could be recorded by CloudTrail
    (option C) and pushed to a CloudWatch Log (option D) for
    processing or notification. While notifications would be sent
    via SNS, the question doesn't specifically ask for a notification
    mechanism.

Chapter 6: Amazon Relational Database
Service

  1. A. Amazon RDS primarily offers the ability to increase the size
    of a database instance without major hassle. This translates
    into scalability: you can scale up your database instances to
    handle growing usage (option A). However, this is not elastic;
    this process cannot be done automatically (option C) or in a
    brief moment of increased usage (option B). And network
    access to databases has little to do directly with RDS (option D).

  2. D. Options A, B, and C are all true of Auto Scaling policies but
    not of Amazon RDS. While Amazon RDS makes increasing the
    size of a database instance easy, as well as initial provisioning,
    it does not offer automatic instance changes or on-the-fly
    elasticity. Therefore, option D is correct.

  3. A. The key here is to remember that Amazon RDS does not
    handle scaling automatically. Therefore, it is quite possible that
    utilization hits 100 percent (option A) if you do not scale your
    database instances manually.

  4. C. Amazon RDS will patch your system automatically, but only
    when what is deemed a critical security or reliability patch is
    available (option C). This means that minor patches, or patches
    that don't affect security or reliability, are deferred (option B).

  5. A, B. Limiting access to a database instance can come in a few
    forms. IAM roles (option A) can provide a service-level
    restriction to Amazon RDS instances, and NACLs (option B)
    can provide restrictions at the subnet or VPC level. Option C
    looks correct, but user permissions apply to the database once a
    user has already accessed the instance and is therefore
    incorrect. Bastion hosts (option D) are not applicable here.

  6. B, C. Amazon RDS offers automated snapshots, which are
    taken daily (option B). You can also create a snapshot of your
    database at any time (option C). This is not limited to a
    maintenance window, either (option D).

  7. B. By default, Amazon RDS sets up automated backups with a
    7-day retention period.

  8. B. Read replicas do not have backups configured by default, as
    the primary instance is typically the instance backed up.

  9. D. In a multi-AZ configuration, the standby instance cannot be
    in the same availability zone as the primary instance.

  10. A. In a multi-AZ configuration, replication is done
    synchronously, not asynchronously.

  11. A, C. In a multi-AZ configuration, a failure triggers a number of
    events. The standby instance becomes the primary instance,
    and any DNS requests to the database will be resolved to the
    standby instance going forward.

  12. A. This should be a simple question to correctly answer. Any
    time you want to increase read performance, a read replica is
    going to significantly improve performance.

  13. D. Read replicas can be in the same availability zone as the
    primary instance, a different availability zone than the primary
    instance in the same region, or a different region than the
    primary instance altogether.

  14. C. In a multi-AZ configuration, the standby instance must be in
    a different availability zone but in the same region as the
    primary database instance.

  15. A, B. Read replicas are ideal for improved performance in high-
    read situations (option A), but not in high-write situations
    (option C). They are also great for reading data related to
    reporting (option B). They are not failover solutions (option D).

  16. C. Amazon Aurora volumes can be as large as 64 TB, and this
    same size limit applies to Aurora tables.

  17. C, D. Amazon Aurora can function as a drop-in replacement for
    both MySQL and PostgreSQL.

  18. A, D. AWS will both patch your database instances and take
    backups of them automatically (options A and D). However,
    AWS will not optimize queries and has no idea of your
    organization's compliance requirements.

  19. B. All of these options could potentially help the problem, but
    the question specifically mentions issues with write requests.
    Both ElastiCache (option A) and read replicas (option C) are
    aimed specifically at improving read requests. While this might
    lighten the overall load on the database instance and have an
    effect on write requests, only option B directly addresses the
    problem with a heavier-weight instance type.

  20. D. Any active connections to a failing instance typically fail or
    terminate abnormally as the instance to which they are
    connected cannot serve those requests (option D).

Chapter 7: Auto Scaling

  1. D. EC2 Auto Scaling can scale only instances. With launch
    templates, you can scale a group with both on-demand
    (option A) and spot instances (option B), making the correct
    answer option D.

  2. A, B. A launch configuration contains the ID of the AMI to use
    to launch an instance (option A), any block mappings (option
    B), a key pair for connecting, the instance type to launch, and
    one or more security groups for the instance.

  3. D. Be wary of any question that asks you to determine how
    many instances are running in a group at a given time. Even
    with a desired capacity set to 3, the number of instances in a
    group will fluctuate based on triggers. For example, this group
    might have scaled up to 5 and still be in the process of scaling
    back down to the new desired capacity of 3. Because of this
    uncertainty, the correct answer is option D.

  4. B, C. Launch templates can only be created from scratch or
    from launch configurations, and not from an EC2 instance, so
    option A is incorrect. (You can copy parameters from an
    instance but not create a template directly from an instance.)
    Templates do allow for versioning and slight variations in
    copies (option B), as well as for using both on-demand and spot
    instances (option C). They do not, however, allow multiple
    versions to be assigned to the same group, as is the case with
    launch configurations.

  5. C. This is the type of question you can only hope for on an
    exam; it's basic and direct, as well as simple. The only
    parameter that would change automatically is Desired Capacity.
    In this case, it would presumably increase if network saturation
    occurs to provide an additional instance (or more) in times of
    peak traffic.

  6. D. Your launch templates do not provide a means to indicate a
    target availability zone. You can specify AZs to use in your Auto
    Scaling group, but the launch template is focused on individual
    instances to launch within the group. The group then has the
    ability to place those instances in appropriate AZs.

  7. D. All parameters in a launch template are optional. Although
    it would be unusual and arguably not that helpful to have a
    launch template with no AMI ID or key pair (for example), it's
    allowed by AWS.

  8. D. As a general principle, the larger an Auto Scaling group, the
    less effective a static scaling policy turns out to be. Imagine
    adding a single instance to a fleet of 50 and expecting anything
    but fairly marginal results! In cases where you have large
    instance counts, PercentChangeInCapacity is often the most
    effective approach as it can proportionally scale. In the event
    that using a percentage isn't an option, the next best option is
    likely using ChangeInCapacity with a higher number or setting
    up scaled policies for different tiers of change.
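
To see why a percentage-based policy scales better with group size, compare the two adjustment types on a hypothetical 50-instance fleet. (AWS applies its own rounding rules to percentage results; rounding up here is a simplification.)

```python
import math

# Fixed-size adjustment: the same absolute change regardless of fleet size.
def change_in_capacity(current, adjustment):
    return current + adjustment

# Percentage adjustment: scales proportionally with the fleet.
# (Rounding up is a simplification of AWS's actual rounding rules.)
def percent_change_in_capacity(current, percent):
    return current + math.ceil(current * percent / 100.0)

# Adding a single instance to a 50-instance fleet is only a 2% bump,
# while a 20% policy adds 10 instances in one scaling event.
small_bump = change_in_capacity(50, 1)
big_bump = percent_change_in_capacity(50, 20)
```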

  9. C. First, eliminate option D; unless the new instances were
    launched mere seconds ago, this is not the best answer. The
    other options all propose a common situation: something has
    changed related to the new instances as compared to the ones
    that are working correctly. A keypair (option A) might affect
    SSH access, but not web access. A different availability zone
    should not affect access, as the Auto Scaling group and load
    balancer should automatically handle this. Option C, however,
    is valid: a different security group could result in web traffic
    being disallowed in, causing a lack of connectivity.

  10. A, D. The recurring scheduled surge of activity makes selecting
    option A a good first choice. Knowing ahead of time that
    activity increases at 4 and decreases at 8 means you can adjust
    the desired capacity of the group accordingly. Option D is also
    correct, although a bit trickier. Over 4 hours, if the maximum of
    the group were sufficient to handle the traffic, there would be
    problems only in the first chunk of access time (perhaps 4 to
    4:30). You would then expect enough additional instances to
    have launched to resolve any problems. That problems persist
    until demand decreases at 8 suggests that the group never
    launches enough instances. This is a case where the maximum
    value should be tweaked to account for this.

  11. B. By default, an EC2 Auto Scaling group has a cooldown
    period of 300 seconds, or 5 minutes.

  12. B. All of these options are possible with both launch templates
    and launch configurations except for option B. Only a launch
    template can be versioned.

  13. A, C. Long cooldown periods (option A) can result in instances
    not being started quickly enough to meet demand. Additionally,
    a scaling event might occur but the step size is not large enough
    (option C), meaning that multiple scaling events—each with
    instance startup and cooldown periods involved—must occur to
    quickly scale out.

  14. A, C. Only launch templates allow you to use spot instances
    alongside on-demand instances, whereas launch configurations
    allow just on-demand instances. Additionally, T2 instances can
    only be used with launch templates.

  15. D. Auto Scaling groups do not restart failed instances (option
    D). Instead, if an instance fails its health check, a new instance
    is started up (option C).

  16. C. Health checks begin on a new instance as soon as it enters
    the InService state. This ensures that the instance is fully
    capable of responding to the health check prior to that check
    being executed.

  17. C. The most likely answer here, given that health checks are
    passing, is a spot price change that causes a spot instance to
    terminate. This occurs to a spot instance regardless of whether
    or not it is in an Auto Scaling group.

  18. A, B. Whenever an instance is moved into a Standby state, the
    Auto Scaling group assumes this change was intentional. It
    therefore stops health checks and reduces desired capacity by 1
    until the instance is put back into the InService state.

  19. C. The first criterion for termination of an instance is the
    number of instances in an availability zone. Since zone 3 has
    the most instances, it will be the zone from which an instance
    is terminated. Then, the regular priority is followed, as listed in
    option C.

  20. B, D. Both options B and D reflect termination policies that are
    specific to certain types of instances. Option B works only if you
    have instances with launch templates (which is not required),
    and option D works only if you are using an allocation strategy
    to mix spot and on-demand instances.

Chapter 8: Hubs, Spokes, and Bastion Hosts

  1. B. VPC peering connections always begin with pcx, then a dash,
    and then a random string of numbers. The only connection
    name here that matches this format is option B.

  2. A. A bastion host is a host that is outside of a private VPC, and
    it provides access to the resources within the VPC (option A). It
    does not assign any IP addresses but does itself have a public IP
    address (which often is elastic).

  3. A, D. Bastion hosts should be as secure as possible. Of the
    options provided, using multifactor authentication and
    whitelisting addresses are the only two that are valid. Bastion
    hosts typically aren't accessed on port 80, so option B does not
    make sense in this context. Option C is not helpful as more
    than just administrators would need to access the bastion host.

  4. A. VPC peering can save costs by preventing egress (option A).
    Data that moves between two peered VPCs does not egress to
    the public Internet and instead flows across the AWS network,
    reducing overall egress costs.

  5. B. VPCs can be peered across regions, whether or not the two
    VPCs are in the same account (option B).

  6. A, C. This is simply a case of rote memorization, unfortunately.
    Interregion VPC peering doesn't support jumbo frames (option
    C) or IPv6 traffic (option A).

  7. B, D. Bastion hosts must be accessible from the Internet to be
    useful. This requires that they exist in a public subnet (option
    B) and have a public IP address (option D). Though it is
    common for a bastion host to have an elastic IP address (option
    C), it is not a requirement.

  8. C. Peered VPCs cannot have clashes in their IP addresses,
    which means nonoverlapping CIDR blocks.
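
This nonoverlap requirement can be sketched with Python's standard ipaddress module (an illustrative check, not an AWS API call):

```python
import ipaddress

def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# 10.0.128.0/24 sits inside 10.0.0.0/16, so VPCs with these blocks cannot peer.
print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/24"))  # True
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False
```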

  9. A, D. Bastion hosts are typically secured using a variety of
    mechanisms, especially security groups (option A). They should
    also be in Auto Scaling groups to ensure they are always
    available when needed (option D).

  10. A. AWS does not allow transitive routing, which is traffic
    flowing from one VPC peered to another VPC, and then from
    that VPC to a third peered VPC.

  11. D. The key here is that this question represents two different
    transmissions. The first, from VPC B to VPC A, is allowed, and
    the second from VPC A to VPC C, is also allowed. This would
    only be disallowed if traffic were directed to flow from VPC B
    directly to VPC C.

  12. D. This is a bit tricky but raises a good test-taking tip: if you are
    asked about limits and an answer provides a default limit but
    says that default can be raised, that is likely the correct answer.
    In this case, that answer is option D.

  13. D. VPC peering connections do not require any hardware to set
    up or run.

  14. A. Bastion hosts and NAT devices are quite similar, and the
    core difference is the direction that traffic flows. Bastion
    hosts allow traffic into private resources from the Internet,
    whereas NAT devices allow private resources to reach out to
    the Internet.

  15. C. Neither option A nor B helps you secure or otherwise
    improve the network you've inherited. Of options C and D, both
    are valuable, but C provides security and should be done before
    adding logging (another important step).

  16. A, D. Bastion hosts are not for web access (options B, C) but
    instead for direct access, typically through SSH (option A)
    and/or RDP (option D).

  17. D. Edge-to-edge routing is the exact scenario described in this
    question: there are two peered VPCs, and one of those VPCs
    also connects to an additional network. Routing is not allowed
    in AWS from one “edge” (the private additional network)
    through a middle VPC to the peered VPC.

  18. B. In a hub-and-spoke model, you have one central VPC that all
    other VPCs are peered with. This means that for a model with
    n VPCs, you'd have (n - 1) peering connections. In this case,
    with five total VPCs, you'd expect four of those to have peering
    connections with the central VPC.
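
The (n - 1) count is easy to verify with a quick sketch; the full-mesh count is included only for contrast:

```python
def hub_and_spoke_connections(total_vpcs: int) -> int:
    """Hub-and-spoke: one peering connection from each spoke VPC to the hub."""
    return total_vpcs - 1

def full_mesh_connections(total_vpcs: int) -> int:
    """Full mesh: every VPC peered directly with every other VPC."""
    return total_vpcs * (total_vpcs - 1) // 2

print(hub_and_spoke_connections(5))  # 4
print(full_mesh_connections(5))      # 10
```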

  19. B. This one takes some careful reading and might even be
    worth diagramming. Only option B provides a working, legal
    AWS solution, though: logs are moved to VPC A from both B
    and C, each with their own VPC peering connections, as
    transitive routing is disallowed. Then VPC D has its own
    peering to A for loading data. This is actually a classic hub-and-
    spoke model using a VPC (A in this case) as a shared services
    VPC for log aggregation.

  20. A. This one isn't hard in concept but takes some very careful
    reading. You want to route anything that has a destination IP
    address within a peered VPC through the peering connection
    with that VPC. In this question, the only answer that matches
    that is option A.
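
This kind of destination-based route selection can be illustrated with the standard ipaddress module; the route table entries and target IDs below are hypothetical:

```python
import ipaddress

def select_route(destination_ip: str, routes: dict) -> str:
    """Return the target of the most specific route whose CIDR contains the IP."""
    ip = ipaddress.ip_address(destination_ip)
    matches = [
        (ipaddress.ip_network(cidr), target)
        for cidr, target in routes.items()
        if ip in ipaddress.ip_network(cidr)
    ]
    # The longest prefix (largest prefixlen) wins, as in a VPC route table.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical route table: local VPC traffic, one peering connection, default route.
route_table = {
    "10.0.0.0/16": "local",
    "172.31.0.0/16": "pcx-0a1b2c3d",  # peering connection to the peered VPC
    "0.0.0.0/0": "igw-12345678",      # everything else: Internet gateway
}
print(select_route("172.31.4.20", route_table))   # pcx-0a1b2c3d
print(select_route("198.51.100.7", route_table))  # igw-12345678
```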

Chapter 9: AWS Systems Manager

  1. D. While AWS Systems Manager does prevent many critical
    vulnerabilities through patching, it is not itself a service for
    alerting users to critical vulnerabilities.

  2. D. All AMIs that have Windows or Linux from the Amazon
    marketplace will have the AWS Systems Manager agent
    preinstalled. Anything using a different operating system (such
    as macOS) or from a third party will need the agent installed.

  3. B. Any instance running an SSM agent will need to assume an
    IAM role for connecting to the AWS Systems Manager service
    (option B). There is no such policy as AWSSystemsManager
    (option C).

  4. A. This requires pure rote memorization. The name of the
    policy is AmazonEC2RoleforSSM.

  5. A, B. Only AWS instances, on-premises instances, or in some
    cases other cloud provider instances can be managed by AWS
    Systems Manager. It cannot manage containers or Lambda
    functions.

  6. A, D. You can create resource groups using tags (option A),
    which in turn implies you can use a tag to indicate
    environment, application, and so forth (option D). You cannot
    create resource groups based on IAM roles or account numbers.

  7. C. Resource groups can filter resources based on tag or
    environment, and they can query based on tags as well.
    However, they cannot span multiple regions.

  8. A, C. AWS Systems Manager supports command, policy, and
    automation documents.

  9. A, B. AWS Systems Manager supports documents in JSON and
    YAML.

  10. D. All of these document types can interact with State
    Manager.

  11. A. The only one of these that is an actual command is the Run
    command (option A), which is what command documents
    interact with.

  12. B. AWS KMS is the only encryption protocol supported by
    Session Manager.

  13. C, D. State Manager is aimed at compliance, which can in turn
    help provide useful security measures on your instances.

  14. B, D. Both AWS CodeBuild and AWS CodeDeploy can work
    with the Parameter Store.

  15. B. A patch baseline stores the patches that will be
    automatically deployed to your instances. If you want to avoid a
    certain patch, simply remove it from the baseline.

  16. A, B. During a maintenance window, you can update patches,
    run PowerShell commands, execute Lambda and step
    functions, and build AMIs. You cannot remove patches or
    restart an instance.

  17. D. AWS Systems Manager documents can be used cross-
    platform without any changes (option D).

  18. D. No action is required here because the AWS Systems
    Manager agent is already open source, and its code is available
    on GitHub. Note that option C is incorrect because the Systems
    Manager agent comes preinstalled only with Linux and
    Windows AMIs (not macOS) and only if those AMIs come from
    the Amazon Marketplace.

  19. B. The Run command allows you to execute scripts and other
    commands on instances. In this case, a Run command could
    execute the compliance script needed.

  20. B, C. You can change the default patching behavior either by
    writing an automation document or by writing your own AWS
    Systems Manager command (options B and C).

Chapter 10: Amazon Simple Storage Service
(S3)

  1. D. S3 allows file uploads up to 5 TB, so none of the issues are
    related to file size limits (options B, C). Instead, the Multipart
    Upload option will upload larger files—AWS recommends
    anything larger than 100 MB—in multiple parts and will often
    resolve the issue.
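
The part arithmetic behind Multipart Upload can be sketched as follows; the 100 MB part size mirrors the recommendation above, and the function and constant names are illustrative:

```python
import math

MAX_OBJECT_BYTES = 5 * 1024**4   # 5 TB, the S3 single-object limit
PART_SIZE_BYTES = 100 * 1024**2  # illustrative 100 MB part size

def multipart_part_count(object_bytes: int) -> int:
    """Number of parts a multipart upload would use at the chosen part size."""
    if object_bytes > MAX_OBJECT_BYTES:
        raise ValueError("object exceeds the 5 TB S3 limit")
    return math.ceil(object_bytes / PART_SIZE_BYTES)

print(multipart_part_count(750 * 1024**2))  # a 750 MB file uploads in 8 parts
```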

  2. A. This is another question that is tricky unless you work
    through each part of the URL, piece by piece. The first clue is
    that this is a website hosted on S3, as opposed to directly
    accessing an S3 bucket. Where website hosting is concerned,
    the bucket name is part of the fully qualified domain name
    (FQDN); where direct bucket access is concerned, the bucket
    name comes after the FQDN. This is an essential distinction.
    This means that options B and C are invalid. Then, you need to
    recall that the S3-website portion of the FQDN is always
    connected to the region; in other words, it is not a subdomain.
    The only choice where this is the case is option A.

  3. C, D. PUTs of new objects have read-after-write consistency.
    DELETEs and overwrite PUTs have eventual consistency across
    S3.

  4. C. First, note that “on standard class S3” is a red herring and
    irrelevant to the question. Second, objects on S3 can be 0 bytes.
    This is equivalent to using touch on a file and then uploading
    that 0-byte file to S3.

  5. C. This is a matter of carefully looking at each URL. Bucket
    names—when not used as a website—always come after the
    fully qualified domain name (FQDN); in other words, after the
    forward slash. That eliminates option A. Additionally, the
    region always comes earlier in the FQDN than amazonaws.com,
    eliminating option D. This leaves options B and C. Of the two,
    option C correctly has the complete region, us-east-2.
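
The two endpoint shapes can be made concrete with a small sketch (the bucket name is hypothetical, and the path-style REST form is the one used in this question):

```python
def website_endpoint(bucket: str, region: str) -> str:
    """S3 static website hosting: the bucket name is part of the FQDN."""
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

def rest_endpoint(bucket: str, region: str) -> str:
    """Direct (path-style) bucket access: the bucket name follows the FQDN."""
    return f"https://s3.{region}.amazonaws.com/{bucket}"

print(website_endpoint("prototype-bucket-32", "us-east-2"))
print(rest_endpoint("prototype-bucket-32", "us-east-2"))
```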

  6. B. The key here is the phrase “usually accessed multiple
    times.” You really want a blending of S3 standard (most
    accessible but also highest cost) and S3-IA (documents
    accessed less frequently and cheaper). Intelligent tiering
    (option B) provides for this; it will move documents into S3-IA
    when not accessed, but then when accessed, they are moved
    back to standard (and located there for additional accesses).

  7. A. S3 Standard provides 99.99 percent availability.

  8. D. All S3 storage classes provide the same durability: eleven 9s,
    or 99.999999999 percent.

  9. C. S3 One Zone-IA provides 99.5 percent availability.

  10. D. All S3 storage classes with the exception of S3 One Zone-IA
    store data in at least three availability zones, and often more
    (depending on the region and AZ availability).

  11. A. When a new S3 bucket is created, only the bucket creator
    can access that bucket and its resources.

  12. A, D. There are four ways to control access: IAM policies
    (option A), bucket policies, access control lists (option D), and
    query string authentication.

  13. B, C. SSE-IAM (option A) and Amazon Client Encryption
    Toolkit are not valid Amazon or AWS tools or services. SSE-S3
    and SSE-KMS are, and both are available for encryption.

  14. B. The Amazon S3 Encryption Client gives you complete
    control over your keys.

  15. A, C. Amazon Glacier Deep Archive is both less expensive than
    standard Glacier and also provides fewer access options.

  16. B, C. S3 Intelligent-Tiering is ideal for unknown or changing
    access patterns, as it will adjust the location of files based on
    usage between S3 Standard and S3 Standard-IA.

  17. A, D. Remember that all S3 storage classes share the same
    durability; this means that option A is true. Then, you need to
    know that availability decreases moving from S3 Standard to S3
    Standard-IA to S3 One Zone-IA. This means that options B and
    C are false and option D is true.

  18. A. Although S3 Intelligent-Tiering moves data between S3
    Standard and S3 Standard-IA, its performance is identical to S3
    Standard.

  19. B. S3 Intelligent-Tiering provides 99.9 percent availability.

  20. B. This is pretty straightforward. Since you do not want to
    move the data out of Glacier, turning on Expedited retrieval is
    the fastest way to access the data.

Chapter 11: Elastic Block Store (EBS)

    1. B. IOPS stands for input/output operations per second.

    2. B. Provisioned IOPS SSD supports 32,000 IOPS, far more than
      any other volume type.

    3. D. All EBS volume types can be as large as 16 tebibytes.

    4. A. General-purpose SSDs are ideal for general usage, including
      a system boot volume.

    5. C. A throughput-optimized HDD is perfect for data
      warehouses, since the workload needs to consistently stream
      and process large data sets.

    6. B. A database workload will need to support a lot of IOPS, and
      a provisioned IOPS SSD is the best choice for these types of
      workloads.

    7. C, D. Neither a throughput-optimized HDD nor a cold HDD can
      be selected as boot volumes.

    8. A. Default volumes created through the console are general-
      purpose SSDs.

    9. A. Only the two SSD types can be bootable (options A and B).
      Of those two types, the general-purpose SSD is the cheaper
      option.

    10. A. Default volumes created through the console are general-
      purpose SSDs.

    11. A, B. Snapshots of EBS volumes are both incremental (option
      A) and stored on S3 (option B). However, they are accessible
      only through the EC2 API—not the S3 API—and they are taken
      while the volume is running, not unmounted.

    12. B. You can always create snapshots from encrypted volumes,
      and those snapshots will also be encrypted.

    13. C. Unencrypted snapshots can be encrypted by copying them in
      the AWS console and selecting the option to encrypt the copy.

    14. C. The only reason a snapshot would not contain all of the data
      from an application using the volume would be if the
      application or the operating system of the application was
      caching content. All of the other options are incorrect; volumes
      and instances do not need to be unmounted or stopped,
      respectively, and option D is completely made up.

    15. B. Encryption keys are always unique 256-bit AES keys.

    16. A, C. You can either copy the unencrypted snapshot to an
      encrypted snapshot and then launch a new instance from that
      (option C), or you can select the option to encrypt the instance
      at creation time (option A).

    17. A. For any EBS volume that is set to persist beyond the lifetime
      of an EC2 instance, the data on that volume will stay, regardless
      of the state of the instance.

    18. D. Root volumes by default will delete on termination of the
      attached instance. However, by setting the Delete on
      Termination flag to No, you can prevent this behavior and
      maintain the data on that volume past the life of the instance.

    19. D. You can always change the volume type of a running volume
      with the console, API, or CLI.

    20. C. Somewhat surprisingly, AWS states that snapshots of any
      volume size—from 1 TB to 16 TB—should take the same
      amount of time, on average. There can be minor
      inconsistencies, but in general, all snapshots are designed to
      take the same amount of time.

Chapter 12: Amazon Machine Image (AMI)

  1. C. AMIs can be public, private, or shared. There is no protected
    accessibility level.

  2. C. AMIs are available only in a single region. However, they
    can be copied to other regions (option C). In this question,
    then, the desired AMI simply needs to be copied from US-West-
    1 to US-East-2, and then it can be used.

  3. A, C. AMIs are available through Amazon via AWS, through the
    AWS Marketplace (option C), through the AWS community,
    and by creating one from an instance (option A). There is no
    such thing as the Global AMI Marketplace (option B), and
    vendors make their AMIs available through AWS, not an
    external GitHub repository (option D).

  4. A, D. AMIs can be either instance-backed or EBS-backed. There
    is no such thing as a volume-backed AMI or an EMS-backed
    AMI.

  5. A. Shared AMIs are available for broad use, but permissions to
    use the AMI must be granted by the owner of the AMI.

  6. A, B. Private AMIs cannot be shared across accounts. You
    would need to convert the AMI to a shared AMI (option B) and
    then, as the owner of the AMI, grant permissions to your
    coworker to use that AMI (option A).

  7. A. If you expect workloads to be short-lived—such as in a
    volatile Auto Scaling group as described in the question—then
    an instance-backed AMI is your best choice. EBS-backed AMIs
    are more suitable for preserving data for longer periods of time,
    and there is no such thing as a transient-backed AMI.

  8. B. EBS-backed AMIs are ideal for longer-lived jobs. The only
    short-lived instance in the list of answers is a container-based
    application (option B), so that would be a poor candidate for an
    EBS-backed AMI.

  9. C. Only accounts in which an AMI is launched are billed
    (option C), regardless of the creator or owner of the AMI.

  10. B, D. You can copy an AMI to a new region, but the resulting
    AMI is both distinct from the source AMI (option B) and has its
    own unique identifier (option D).

  11. D. A deregistered AMI cannot be used to start an instance. You
    can, however, register a new AMI from an EBS snapshot.

  12. A, C. EBS-backed AMIs can be encrypted using a KMS
    customer master key or a customer managed key that you
    specify.

  13. B. The action to launch an EC2 instance from an AMI is called
    RunInstances.

  14. C. This is a bit tricky and must be memorized. New instances
    are, unless otherwise specified, set to use the encryption state
    of the AMI's source snapshot. This preserves the encryption
    from AMI to instance.

  15. A, B. You can both set encryption by default (option A) and
    supply encryption instructions at instance launch (option B).
    Although you can encrypt an instance after launch, that does
    not satisfy the question's requirement to keep the instance
    encrypted at all times, and using a different AMI is not a valid
    option.

  16. B. Amazon images are easily distinguished because they
    consistently use amazon as an owner in the account field.

  17. B. You can easily share an AMI with other AWS accounts by
    adding the account IDs to the AMI's permissions. You do not
    need to make the AMI public to accomplish this.

  18. D. There is no limit to the number of AWS accounts with
    which an AMI can be shared and used.

  19. D. AWS actually doesn't copy launch permissions, user-defined
    tags, or S3 bucket permissions when an AMI is copied from one
    region to another. All of these must be re-created on the new
    AMI.

  20. B. When an AMI is copied to a new account, a duplicate of that
    AMI is created in the new account. The new AMI is owned by
    the owner of the new account, which in this case is your
    coworker.

Chapter 13: IAM

  1. C. Users of AWS are responsible for security in the cloud,
    whereas AWS is responsible for security of the cloud.

  2. A, D. AWS is responsible for security of the cloud, meaning
    that they maintain and secure the physical servers and actual
    networking equipment within AWS. Individual users must
    handle application security as well as network port
    configuration (this latter is typically accomplished through
    network ACLs and security groups).

  3. A, B. Users of AWS are responsible for security in the cloud,
    which in this case would include the operating system of any
    EC2 instances as well as encrypting (or choosing to encrypt)
    data. AWS manages RDS instance security and operating
    systems as well as their physical datacenters.

  4. C. Shared responsibility indicates that both the user and AWS
    have some significant responsibility. In the case of EC2
    instances (option C), AWS patches and maintains the
    underlying hosts, whereas users maintain and patch the
    operating system running on those hosts.

  5. B. When using AWS-provided encryption options such as SSE-
    S3 and SSE-KMS, AWS handles the actual encryption process.
    However, the user must specify what is to be encrypted,
    resulting in a shared responsibility.

  6. B, C. The two types of users in an AWS account are the root
    user and IAM users. There can be only one root user but as
    many IAM users as desired.

  7. C. The principle of least privilege means that users have only
    enough permissions to do their job. Although options A and B
    are valid principles in a solid AWS IAM setup, they do not
    define the principle of least privilege.

  8. A, B. Users can identify themselves through a username
    (option A) for the web console and through an access key
    (option B) for the AWS API and SDK.

  9. D. Key pairs are created primarily for access to AWS resources,
    specifically an EC2 instance. Access to the web console is
    through a username and password, and access to the CLI and
    SDK is through an access key.

  10. C. For accessing running EC2 instances, you'll need a valid key
    pair. Using this key pair, you can use SSH or RDP to access and
    authenticate running instances.

  11. C. Unlike with a group, permissions granted through a role are
    temporary for a user.

  12. B. EC2 instances cannot be assigned group membership and
    can only be assigned policies through IAM roles. An IAM policy
    cannot be directly assigned to an instance.

  13. D. IAM policies can be assigned to users, groups, and roles.

  14. A, D. AWS recommends the use of managed policies (option D)
    rather than inline policies, because managed policies are
    defined once and can be assigned to multiple users, groups,
    and/or roles (option A).

  15. D. The version of a policy references the language used in the
    policy (option D), rather than anything related to the specific
    policy or policy author.

  16. A, B. Valid policies have versions, statements, sids (option B),
    effects (option A), principals, actions, resources, and
    conditions. They do not have ids or affects.
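
A minimal, hypothetical policy document showing most of these elements (the Sid, ARN, and condition here are illustrative; a Principal element would additionally appear in resource-based policies):

```python
import json

# Illustrative identity-based policy; the bucket ARN is hypothetical.
policy = {
    "Version": "2012-10-17",  # the policy language version, not a revision number
    "Statement": [
        {
            "Sid": "AllowReadReports",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports/*",
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```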

  17. B, D. The principal indicated in a policy should reference an
    IAM user (option B), role, or federated user (option D), and
    provide access to resources for that user.

  18. A, B. Passwords both expire and are subject to password
    policies set in the AWS Console or otherwise. Access keys, on
    the other hand, are long-lived (option A) and are not governed
    by password policies (option B). This makes them potentially
    more dangerous if care is not taken to regulate and control
    them.

  19. C, D. The term access key in AWS parlance refers to both an
    access key ID and a secret access key. The two as a pair provide
    programmatic access to the AWS CLI and SDK.

  20. A, C. You can import your own keys into AWS KMS (option C)
    or allow AWS KMS to create keys for you (option A).

Chapter 14: Reporting and Logging

  1. D. Monitoring and reporting in AWS provides information that
    can be used in security, compliance, and performance. All are
    equally important in specific contexts, so the best answer here
    is option D.

  2. B. For gathering metrics, Amazon CloudWatch (option B) is
    the best choice. AWS Config gathers information on
    configuration and compliance, and AWS CloudTrail monitors
    API calls.

  3. C. AWS CloudTrail provides insight into API calls, and a client
    interacting with a REST API is exactly that.

  4. B. The Amazon CloudWatch Logs Agent, when installed on an
    instance, provides metrics not available in any other manner,
    including using the basic Amazon CloudWatch capabilities.

  5. B. AWS CloudTrail maintains collected data on API calls for 90
    days by default, although this setting can be changed.

  6. D. AWS CloudTrail will collect information on any API call
    made—even between AWS services, such as in option C—within
    AWS. The only option that is not an API call is a login to the
    console (option D). That information is collected, but not by
    AWS CloudTrail.

  7. B, D. AWS CloudTrail trails apply to a single region by default
    (option D) but can be applied to all regions (meaning options A
    and C are both false). They also collect both management and
    data events (option B).

  8. C. Management events in AWS CloudTrail relate to security,
    registering devices, configuring security rules, routing, and
    setting up logging. In the options, this would include A, B, and
    D. Option A is a security event, B is setting up a security rule
    for routing, and D is a routing data rule. Option C, on the
    other hand, is related to data and is a data event rather than a
    management event.

  9. A, C. Because data events capture the movement, creation, and
    removal of data, they are typically much higher volume than
    management events (option A). Data events are also disabled
    by default (option C), making them different from management
    events.

  10. A, C. The RunInstances and TerminateInstances events are
    considered write events. This is easiest to remember because
    they are not read events, and AWS provides only two options:
    read and write. Collecting these events, then, would require a
    trail be set to Write-Only or All (which collects all events).

  11. A. AWS CloudTrail will collect the first copy of any
    management event in a region for free. Any additional copies
    incur cost, though, as do all copies (including the first) of a data
    event.

  12. A. A single Amazon CloudWatch alarm can monitor only a
    single metric at once.

  13. D. CloudWatch alarms have three states: OK, ALARM, and
    INSUFFICIENT_DATA. INVALID_DATA is not a valid alarm
    state.

  14. A, C. In this scenario, there would need to be three out-of-
    threshold data points within the evaluation period of 10
    minutes to trigger an alarm. This means that both options A
    and C would trigger an alarm. Note that it is possible that the
    scenario in option D would trigger an alarm, depending on
    when the out-of-threshold metrics occurred (inside 10
    minutes), but it is not clear from the answer, so options A and
    C are better answers.
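
The "M out of N datapoints" evaluation can be sketched as a simple count over the evaluation window (the names and values are illustrative, not the CloudWatch API):

```python
def alarm_state(datapoints, threshold, datapoints_to_alarm=3):
    """Illustrative M-of-N evaluation: ALARM when at least
    datapoints_to_alarm values in the window breach the threshold."""
    breaches = sum(1 for value in datapoints if value > threshold)
    return "ALARM" if breaches >= datapoints_to_alarm else "OK"

# Ten 1-minute CPU datapoints; three breach an 80 percent threshold.
window = [50, 91, 60, 85, 70, 88, 40, 55, 62, 71]
print(alarm_state(window, threshold=80))  # ALARM
```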

  15. A, C. There are four possible settings for handling missing data
    points: notBreaching (A), breaching, ignore, and missing (C).

  16. C. A log stream is a collection of events from a single source
    (option C). Options A and B describe a log group, and there is
    no CloudWatch analog for option D.

  17. A. AWS Config does not provide remediation mechanisms. You
    can write code to remediate situations that cause notifications
    via AWS Config, but the remediation capability is not a
    standard part of AWS Config itself.

  18. A, C. AWS Config will notify you if a bucket has been granted
    public access (provided you have set that baseline up in AWS
    Config). You would then need to remediate that access, and that
    would require AWS Lambda (option C).

  19. C. Configuration items do not include IAM-related information
    (option C). They do include event IDs (option A), configuration
    data about the resource, basic information about the resource
    such as tags, a map of resource relationships (option B), and
    metadata about the CI, including the version of the CI itself
    (option D).

  20. D. A change-triggered rule will be evaluated every time a
    resource is changed, meaning that it is the most immediate
    evaluation available. Periodic rules are evaluated against a
    specific schedule. Tagged and immediate evaluations are not
    actual AWS concepts.

Chapter 15: Additional Security Tools

  1. B, D. Amazon Inspector offers two types of assessments:
    network assessments and host assessments.

  2. C. Assessment templates are used by Amazon Inspector to
    determine what rules should be used in assessing and
    evaluating an environment.

  3. B. Host assessments require an agent to be installed, but
    network assessments do not.

  4. D. The Runtime Behavior Analysis package identifies risky
    behavior, including open and unused ports. Although the
    Security Best Practices package is also related to this area, it is
    the Runtime Behavior Analysis package that will identify open
    ports specifically.

  5. D. The Network Reachability rules package covers all of these
    areas, as well as security groups, NACLs, subnets, VPCs, direct
    connections, and Internet gateways.

  6. B, C. Amazon GuardDuty looks for reconnaissance, instance
    compromise, and account compromise.

  7. A, D. Vulnerability scans typically look for IP addresses,
    hostnames, open ports, and misconfigured protocols. These are
    key areas to focus on when securing your system.

  8. C. Amazon GuardDuty stores security findings in the region in
    which they apply, so with three regions, you would have three
    different sets of findings.

  9. B. In a multi-account setup, findings remain in individual
    accounts but are aggregated into the master account as well.

  10. A, B. Amazon GuardDuty analyzes AWS CloudTrail, VPC flow
    logs, and AWS DNS logs.

  11. A, C. Security findings are maintained by region. To aggregate
    findings across regions, you'd need to use AWS CloudWatch
    events and push findings to a common data store, like Amazon
    S3. You can then use those findings—now in a single S3
    bucket—however you like.

  12. D. Amazon GuardDuty analyzes AWS CloudTrail, VPC flow
    logs, and AWS DNS logs. It does not offer analysis of EC2
    instance logs directly (although some of that data is available
    through flow logs).

  13. C. Amazon GuardDuty is not a log storage service and does not
    offer options for retaining logs.

  14. C. You can both suspend and disable the GuardDuty service.
    However, only disabling the service will result in findings and
    configurations being deleted.

  15. A, C. Amazon GuardDuty delivers findings to two places: the
    GuardDuty console and AWS CloudWatch events. There is no
    such thing as an Amazon GuardDuty CLI, and Amazon
    Inspector does not have access to GuardDuty findings.

  16. A. Amazon GuardDuty threat intelligence stores IP addresses
    (as well as domains) that are known to be used by malicious
    attackers on the Internet.

  17. B. You can run network assessments without host access.
    However, you cannot run host assessments without installing
    the Amazon Inspector agent on the hosts, which would require
    host access.

  18. A. You can set Amazon CloudWatch Events to monitor scaling
    events, and then launch an assessment based on that event.

  19. D. Amazon Inspector offers four severity levels: High, Medium,
    Low, and Informational.

  20. A. Metrics are published to Amazon CloudWatch by Amazon
    Inspector.

Chapter 16: Virtual Private Cloud

  1. A, D. AWS does not provide support for IPv6 NAT devices,
    including NAT instances (option A) and NAT gateways (option
    D).

  2. C. This could be memorized, and /16 turns out to be a common
    CIDR block mask (along with /24). However, you could also
    start with /32 (a single IP) and double the address count each
    time the prefix shrinks by one, from /32 to /31 to /30, all the
    way to /16. So a /24 has 256 IP addresses, a /20 has 4,096, all
    the way up to /16 with 65,536 addresses (option C).

  3. D. The key here is that you need 16 usable IP addresses.
    However, AWS reserves the first four addresses and the last
    address of every subnet CIDR block. Therefore, /28, which has
    16 addresses, provides only 11 usable addresses. The next size
    up, /27 (option D), has 32 addresses and 27 usable, which is
    correct in this case.

  4. C. The number after the slash in CIDR notation provides the
    number of bits available for the network address; the remaining
    number of bits for the host address is 32 minus the bits already
    used. So here, the bits available for the host address would be
    32 – 20 (in /20), so 12 bits (option C).
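The arithmetic in answers 2 through 4 can be verified with a short script. This is a minimal sketch using Python's standard ipaddress module; the 10.0.0.0 base address is arbitrary, and the usable-address count applies the AWS rule that the first four and the last address of every subnet are reserved.

```python
import ipaddress

# Total addresses in a CIDR block: 2 ** (32 - prefix_length).
# AWS reserves the first four and the last address of each subnet,
# so the usable count is the total minus 5.
for prefix in (28, 27, 24, 20, 16):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    host_bits = 32 - prefix
    print(f"/{prefix}: {host_bits} host bits, "
          f"{net.num_addresses} total, {net.num_addresses - 5} usable")
```

A /28 yields only 11 usable addresses, so /27 is the smallest block that provides at least 16.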

  5. A, D. Any instance responding to IPv6 requests should have an
    IPv6 address and reside within a VPC with IPv6 addresses
    available through a CIDR block. So you need a CIDR block
    assigned with the VPC (option A) and an IPv6 address assigned
    to the instance (option D).

  6. C. All IPv6 CIDR blocks in AWS are /56.

  7. D. You cannot select specific IPv6 addresses when using IPv6
    within AWS. Addresses are instead automatically allocated
    from Amazon's pool of IPv6 addresses.

  8. D. You cannot select specific IPv6 addresses when using IPv6
    within AWS. Addresses are instead automatically allocated
    from Amazon's pool of IPv6 addresses.

  9. B. AWS restricts VPCs to a netmask of /16 at the largest,
    resulting in a maximum of 65,536 IP addresses.

  10. B. This question provides somewhat limited information, but it
    does give you everything you need to work this problem. First,
    there are nine applications, and each has three environments.
    That means you'll need 27 application environments (since they
    can't be mixed). But you can share VPCs and subnets, it
    appears; three applications can exist within each VPC, and there
    does not appear to be a restriction against sharing space within
    the same environment. That means the 27 application
    environments can be reduced to nine “logical blocks.” But each
    application needs both a private subnet and a public one. This
    means you'll need 18 subnets in total: one public and one
    private subnet for each of the nine blocks.
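The counting above can be double-checked with a few lines of arithmetic (a sketch only; the variable names are mine, the figures come from the question):

```python
applications = 9
environments_per_app = 3
app_environments = applications * environments_per_app  # 27, since environments can't mix

apps_per_vpc = 3                                   # applications can share a VPC
logical_blocks = app_environments // apps_per_vpc  # 9 "logical blocks"

subnets_per_block = 2                              # one public + one private
total_subnets = logical_blocks * subnets_per_block # 18 subnets in total
print(app_environments, logical_blocks, total_subnets)  # 27 9 18
```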

  11. A, D. Public subnets must have a route to an Internet gateway
    (option D), and that gateway must be attached to the VPC in
    which the subnet exists (option A).

  12. B, D. Egress-only Internet gateways are only required when
    you have IPv6 addresses (option D) and hosts with those
    addresses are in private subnets that need to access the
    Internet (option B). This is because IPv6 addresses are not able
    to use NAT devices to connect to the Internet.

  13. B. Traffic from private instances should flow from the private
    instance to a NAT device, which then routes traffic to an
    Internet gateway and finally out to the Internet (option B).

  14. A. In almost every scenario where a private instance needs to
    access the Internet, a NAT gateway is preferred by AWS as it is
    managed. However, in situations where you might have
    extremely high bandwidth requirements—which is the case in
    this question—a NAT instance is better as it allows for
    customized sizing and management.

  15. C. In general, this is a case for a VPC endpoint. Both options B
    and C are types of VPC endpoints, but S3 requires a gateway
    endpoint (option C), rather than an interface endpoint, and is
    therefore correct.

  16. B. For most services, an interface endpoint is the correct type
    of VPC endpoint to use. However, for Amazon S3 or Amazon
    DynamoDB you'd need to use a gateway endpoint. That makes
    option B correct here.

  17. C, D. VPN tunnels in AWS require a virtual private gateway
    (option D) and a customer gateway (option C).

  18. C. This question is not as hard as it looks, as many of the
    answers are technically incorrect. If you are allowing resources
    from another subnet and want to retain the security of those
    resources, you can chain security groups and simply use
    another security group as the source for traffic (option C).

  19. C. Be careful here! Although AWS typically rearranges NACL
    rules to order them from low to high in a top-to-bottom visual
    sense, it is the rule number that matters, not the “position” in
    the NACL table. NACLs are evaluated from the lowest-numbered
    rule to the highest.

  20. D. The default NACL always has a rule numbered 100 that
    allows in all inbound traffic. You need to counteract this by
    either removing it or adding a rule—which the question
    indicates has been done—but also ensuring that rule is
    numbered lower than rule 100 to take precedence.
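The lowest-number-first evaluation described in answers 19 and 20 can be sketched in a few lines. The rule numbers and CIDR ranges here are hypothetical, chosen only to show why a custom deny rule must be numbered below the default allow-all rule 100:

```python
import ipaddress

# Hypothetical inbound NACL: rule 100 is the default allow-all;
# rule 90 is a custom deny for a restricted source range.
rules = [
    {"num": 100, "cidr": "0.0.0.0/0",      "action": "allow"},
    {"num": 90,  "cidr": "203.0.113.0/24", "action": "deny"},
]

def evaluate(source_ip):
    addr = ipaddress.ip_address(source_ip)
    # NACL rules are checked in ascending rule-number order;
    # the first matching rule wins.
    for rule in sorted(rules, key=lambda r: r["num"]):
        if addr in ipaddress.ip_network(rule["cidr"]):
            return rule["action"]
    return "deny"  # the implicit final '*' rule denies unmatched traffic

print(evaluate("203.0.113.7"))   # "deny": rule 90 matches before rule 100
print(evaluate("198.51.100.1"))  # "allow": falls through to rule 100
```

Renumbering the deny rule above 100 would let the allow-all rule match first, which is exactly the misconfiguration answer 20 warns about.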

Chapter 17: Route 53

  1. B. DNS operates over port 53 (option B), which is far more
    important to know than the fact that the port number is also
    the source of the Route 53 service's name.

  2. D. Although Route 53 does support text records, the record
    type is TXT, not TEXT, so D is incorrect. Route 53 does support
    NAPTR, NS, and SPF records.

  3. A, B. You will need an A record to map an incoming hostname
    (like wisdompetmedicine.com) to an S3 bucket. You will also
    need a CNAME record to map a subdomain, like
    www.wisdompetmedicine.com, to the bare domain name.

  4. A. You would need an AAAA recordset because this is an IPv6
    address. A records point domain names to IPv4 addresses, and
    AAAA records point domain names to IPv6 addresses.

  5. C. Whenever you need to associate a domain name with an
    AWS service—such as CloudFront, S3, or a VPC endpoint—you
    have to use an Alias record rather than an A or AAAA record.
    This is because most AWS services do not expose static IP
    addresses, which an A record expects.

  6. B. This is a textbook case for a failover routing policy. If traffic
    cannot reach a primary instance or service, Route 53 will “fail
    over” routing to a backup or secondary instance.

  7. D. When you have multiple hosts that can respond to traffic
    and are only concerned about the health of the hosts, you can
    use a multivalue answer policy. In this case, you'd point the
    responses at the various Application Load Balancers (ALBs).

  8. C. This should be a pretty easy one: latency routing policies
    return responses to users based on network latency.

  9. D. All of the options allow for multiple hosts. It is easy to
    forget that a simple routing policy allows multiple hosts to be
    entered; it simply returns responses randomly, without any of
    the logic applied for most policies.

  10. C. This is a good use case for weighted routing. You can send
    (for example) 10 percent of traffic to the new site and the
    remaining traffic to the existing site using weighting values.

  11. C. Numbers in a weighted routing policy indicate the
    percentage of traffic to route to that host, in relation to the sum
    of all the weight numbers. In this case, option C would add up
    in total to 50, so you'd double each value to get its percentage of
    traffic: 20 percent for host 1, 50 percent for host 2, and 30
    percent for host 3. This is the requirement in the question, so it
    is the correct answer.
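The percentages in answer 11 follow from dividing each weight by the sum of all weights. A sketch with hypothetical weights of 10, 25, and 15, which reproduce the 20/50/30 split described in the answer:

```python
# Hypothetical weighted routing policy; Route 53 sends each record
# traffic in proportion to weight / sum_of_all_weights.
weights = {"host1": 10, "host2": 25, "host3": 15}
total = sum(weights.values())  # 50

for host, weight in weights.items():
    print(f"{host}: {weight / total:.0%}")  # 20%, 50%, 30%
```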

  12. A. Route 53 uses VPCs to manage privately hosted zones, so
    you are required to use VPCs with private DNS. Option A, then,
    is not possible. Private DNS does support exposing records to
    other VPCs, regions, and accounts.

  13. C, D. Private DNS has very few limitations and, in most cases,
    can do everything a publicly hosted zone can. However, health
    checks are not possible on instances that expose only private IP
    addresses, and you cannot expose a private record to the
    Internet under any circumstances.

  14. B, C. For Amazon Route 53 Traffic Flow to work, you'll need
    both a traffic policy (option B) and a policy record (option C).
    The traffic policy is the rules to define how traffic should flow,
    and a policy record connects that traffic policy to an
    application's DNS name.

  15. A, C. If you want to point one DNS name at another DNS name,
    you typically use a CNAME (option A). This would only be a
    problem if the CNAME was intended to receive requests for a
    zone apex record (like example.com) rather than a subdomain
    (like www.example.com) and redirect them. You can also use
    AWS Alias records to point requests to an existing domain with
    policies already set up (option C).

  16. C. You can set up health checks in Amazon Route 53 to check
    an endpoint, other health checks already set up, or alarms in
    CloudWatch. You cannot directly monitor via CloudTrail
    (option C), although you could monitor an alarm that was
    triggered by a CloudTrail event.

  17. A, B. Amazon Route 53 will stop sending requests to failing
    hosts and will also resend requests when that host responds as
    healthy again (options A and B). Although retries and alarms in
    CloudWatch can be set up, they are not by default, so both
    option C and option D are incorrect.

  18. B. Latency-based policies are focused on latency (as the name
    implies). This does not always translate to the closest region to
    the requestor, as some regions may be closer to the requestor
    but have longer lag times.

  19. A. A geoproximity policy, like a geolocation policy, routes users
    to the closest geographical region. This means that options B
    and C are incorrect, as they are common to both types of
    routing policy. Option D would imply the use of latency-based
    routing, leaving only option A. This is the purpose of a
    geoproximity policy: you can apply a bias to send more or less
    traffic to a certain region.

  20. B, C. Health checks are not always turned on in Amazon Route
    53 (and generally are not by default), so that's the first thing to
    check (option B). All policies can use health checks, so option A
    is incorrect, and an ALB is not required to use health checks,
    making D incorrect as well. It takes three successive failures of
    a health check by default to take a host out of commission, so
    option C is also a possible answer.

Chapter 18: CloudFormation

  1. A, B. AWS provides services like CloudFormation that allow for
    capturing environments, although in JSON and YAML rather
    than XML (so option D is incorrect). This does allow
    deployments to be identical (option A), though, as well as
    building identical environments (option B). You can replace
    manual steps with code, but it's not JavaScript (so option C is
    incorrect).

  2. C. CloudFormation uses JSON and YAML for actual notation,
    so options A and D are associated (and not correct choices). Of
    the two remaining options, CloudFormation does often use the
    AWS API, but not the AWS SDK, for interaction. So option C is
    the choice that is not associated with CloudFormation.

  3. C. AWSTemplateFormatVersion indicates the version of the
    template—and therefore what its capabilities are—by indicating
    the date associated with that version.

  4. C. CloudFormation templates allow for all the provided
    answers, but they require only a Resources component to be
    valid.
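A minimal sketch of what answer 4 describes: only the Resources section is required for a valid CloudFormation template. The template is built here as a Python dict and printed as JSON; the logical name MyBucket and the S3 bucket resource are illustrative, not taken from the book.

```python
import json

# Only "Resources" is mandatory; AWSTemplateFormatVersion,
# Parameters, Outputs, and so on are optional sections.
template = {
    "Resources": {
        "MyBucket": {                   # logical name, mapped by AWS to a real name
            "Type": "AWS::S3::Bucket"
        }
    }
}
print(json.dumps(template, indent=2))
```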

  5. B. The Parameters section in a template provides for indication
    of values used throughout the rest of the template.

  6. B, D. You can assign a resource a logical name in a
    CloudFormation template (option B) but not an actual AWS-
    specific name (so option D is also true). AWS then maps your
    logical names to the actual AWS resource names.

  7. C. While you can separate resources using names or prefixes,
    the AWS-recommended approach is to use tagging (option C).

  8. A. CloudFormation provides an Automatic Rollback On Error
    option that will cause all AWS resources created to be deleted if
    the entire stack doesn't complete successfully.

  9. B. You can use CloudFormation's WaitCondition resource to
    act as a block of further action until a signal is received from
    your application (in this case, when the instance scripts finish
    running).

  10. D. CloudFormation allows for creation of all these resource
    types (and quite a few more).

  11. D. You can use the AWS CLI, API, SDK, and the console to
    execute CloudFormation stacks.

  12. A. You create CloudFormation templates and indicate what
    should occur. Instances are then specific runs of those
    templates.

  13. C. Parameters can be lists, comma-delimited lists, numbers,
    and strings. They cannot be arrays (option C).

  14. D. CIDR blocks come in specific patterns, and therefore you
    should use AllowedPattern to ensure they are properly
    supplied.

  15. B. A stack in AWS terminology is the set of AWS resources
    created and managed by a CloudFormation template.

  16. A. You can mark parameters as NoEcho to ensure that a certain
    parameter value is not shown as the template executes.

  17. B. The URL to a web application created by a stack is an output
    value. One way to think of this is to see that the value cannot be
    created until the stack runs.

  18. A. This would be an input value, as it is something user-
    supplied and required by the template at runtime.

  19. C. Here, you really don't want to use a template parameter.
    Instead, it's better to have CloudFormation look up the AMI
    name and location through a lookup table that would always
    hold current values.

  20. A. Template parameters are the preferred way to allow for user
    input during stack creation.

Chapter 19: Elastic Beanstalk

  1. A, C. While all of these concepts are supported by Elastic
    Beanstalk, AWS specifically calls out single-instance
    deployment (A) and load balancer and Auto Scaling group (C)
    as supported models. Elastic Beanstalk also supports an Auto
    Scaling group–only model.

  2. C. The load balancer and Auto Scaling group model is ideal for
    production (because of the scalability) and for web-based
    environments, because multiple requests can be distributed
    across multiple hosts.

  3. D. It's important to note that AWS considers load-balanced
    environments ideal for web-based instances, but not
    necessarily for databases or backend services. For databases,
    you would want an Auto Scaling group to allow automatic
    scaling, but you would not want a load balancer in front of the
    database servers. This sort of question can come up on the
    exam and is not always obvious to answer.

  4. A, D. platform.yaml requires three fields: a version number
    (D), a provisioner type, and a provisioner template (A).

  5. A, C. custom_platform.json is pretty straightforward in
    defining everything your custom platform needs, such as the
    AMI details (A) and custom variables (C). However, it does not
    define items that are nonstatic, such as the number of
    instances that might be used or the supported languages.

  6. A, D. Elastic Beanstalk supports a number of deployment
    models, including rolling with additional batches and
    immutable (A and D). The other two options are made-up
    terms.

  7. A, C. Both the rolling deployment and the rolling deployment
    with additional batches deployment models allow you to ensure
    your application is always running (A). But you would then use
    the additional batches option to ensure you maintain maximum
    capacity throughout the process (C).

  8. D. An immutable deployment is often slower and more
    expensive than the other models but ensures both the health
    and maximum confidence in a new deployment.

  9. C. Both versions of a rolling deployment as well as an
    immutable deployment satisfy the no-downtime requirement.
    However, the rolling deployment is the least expensive option.
    Additionally, because you have no requirement to maintain
    capacity, you can avoid the extra costs of using additional
    batches or an immutable deployment.

  10. D. All of these are configurable options for Elastic Beanstalk.
    In fact, there is very little that you cannot configure when using
    Elastic Beanstalk.

  11. A, C. Blue/green deployments require multiple environments
    (C) that can run side by side as well as Route 53 (or something
    similar) for weighted routing policies. Although you can use
    Elastic Beanstalk, it is not required, and Amazon RDS is
    unrelated.

  12. D. There is no difference in security between an Elastic
    Beanstalk environment and a manual one. In both cases, there
    are recommendations, but you ultimately must manage and set
    up security in the cloud.

  13. B, D. The two policies provided by Elastic Beanstalk are
    AWSElasticBeanstalkReadOnlyAccess and
    AWSElasticBeanstalkFullAccess.

  14. D. Elastic Beanstalk automatically creates a publicly available
    endpoint for your application in a default deployment.

  15. C. Permissions for Elastic Beanstalk are managed through
    IAM, just as all permissions in AWS are.

  16. A, D. Just as you are required to use IAM permissions for
    accessing Elastic Beanstalk along with the rest of the AWS
    platform, you use your access key (A) and secret key (D) for
    accessing the Elastic Beanstalk API in the same way you'd
    access any other AWS API.

  17. D. Elastic Beanstalk allows usage of any AWS-supported
    database.

  18. D. Elastic Beanstalk will automatically perform minor version
    updates, but you must perform any major updates to ensure
    backward compatibility and application functionality is not
    interrupted.

  19. B, C. Elastic Beanstalk automatically handles minor updates
    (A), and IAM permissions apply to all environments (D) and
    don't get “rolled out.” However, you can use a cloned
    environment to test new features (B) or a major version update
    (C).

  20. A, C. Elastic Beanstalk will store application files and server log
    files in S3.

Chapter 1: Introduction to Cloud Computing and AWS


    Review Questions

    1. Your developers want to run fully provisioned EC2 instances to support their application
      code deployments but prefer not to have to worry about manually configuring and
      launching the necessary infrastructure. Which of the following should they use?

      1. AWS Lambda

      2. AWS Elastic Beanstalk

      3. Amazon EC2 Auto Scaling

      4. Amazon Route 53

    2. Some of your application’s end users are complaining of delays when accessing your
      resources from remote geographic locations. Which of these services would be the most
      likely to help reduce the delays?

      1. Amazon CloudFront

      2. Amazon Route 53

      3. Elastic Load Balancing

      4. Amazon Glacier

    3. Which of the following is the best use-case scenario for Elastic Block Store?

      1. You need a cheap and reliable place to store files your application can access.

      2. You need a safe place to store backup archives from your local servers.

      3. You need a source for on-demand compute cycles to meet fluctuating demand for your
        application.

      4. You need persistent storage for the filesystem run by your EC2 instance.

    4. You need to integrate your company’s local user access controls with some of your AWS
      resources. Which of the following can help you control the way your local users access your
      AWS services and administration console? (Choose two.)

      1. AWS Identity and Access Management (IAM)

      2. Key Management Service (KMS)

      3. AWS Directory Service

      4. Simple WorkFlow (SWF)

      5. Amazon Cognito

    5. The data consumed by the application you’re planning will require more speed and flexibility
      than you can get from a closely defined relational database structure. Which AWS
      database service should you choose?

      1. Relational Database Service (RDS)

      2. Amazon Aurora

      3. Amazon DynamoDB

      4. Key Management Service (KMS)



    6. You’ve launched an EC2 application server instance in the AWS Ireland region and you
      need to access it from the web. Which of the following is the correct endpoint address that
      you should use?

      1. compute.eu-central-1.amazonaws.com

      2. ec2.eu-central-1.amazonaws.com

      3. elasticcomputecloud.eu-west-2.amazonaws.com

      4. ec2.eu-west-1.amazonaws.com

    7. When working to set up your first AWS deployment, you keep coming across the term

      availability zone. What exactly is an availability zone?

      1. An isolated physical data center within an AWS region

      2. A region containing multiple data centers

      3. A single network subnet used by resources within a single region

      4. A single isolated server room within a data center

    8. As you plan your multi-tiered, multi-instance AWS application, you need a way to effectively
      organize your instances and configure their network connectivity and access control.
      Which tool will let you do that?

      1. Load Balancing

      2. Amazon Virtual Private Cloud (VPC)

      3. Amazon CloudFront

      4. AWS endpoints

    9. You want to be sure that the application you’re building using EC2 and S3 resources will
      be reliable enough to meet the regulatory standards required within your industry. What
      should you check?

      1. Historical uptime log records

      2. The AWS Program Compliance Tool

      3. The AWS service level agreement (SLA)

      4. The AWS Compliance Programs documentation page

      5. The AWS Shared Responsibility Model

    10. Your organization’s operations team members need a way to access and administer your
      AWS infrastructure via your local command line or shell scripts. Which of the following
      tools will let them do that?

      1. AWS Config

      2. AWS CLI

      3. AWS SDK

      4. The AWS Console



    11. While building a large AWS-based application, your company has been facing configuration
      problems they can’t solve on their own. As a result, they need direct access to AWS support
      for both development and IT team leaders. Which support plan should you purchase?

      1. Business

      2. Developer

      3. Basic

      4. Enterprise

Chapter 2: Amazon Elastic Compute Cloud and Amazon Elastic Block Store


Review Questions

  1. You need to deploy multiple EC2 Linux instances that will provide your company with
    virtual private networks (VPNs) using software called OpenVPN. Which of the following
    will be the most efficient solutions? (Choose two.)

    1. Select a regular Linux AMI and bootstrap it using user data that will install and configure
      the OpenVPN package on the instance and use it for your VPN instances.

    2. Search the community AMIs for an official AMI provided and supported by the
      OpenVPN company.

    3. Search the AWS Marketplace to see whether there’s an official AMI provided and supported
      by the OpenVPN company.

    4. Select a regular Linux AMI and SSH to manually install and configure the OpenVPN
      package.

    5. Create a Site-to-Site VPN Connection from the wizard in the AWS VPC dashboard.

  2. As part of your company’s long-term cloud migration strategy, you have a VMware virtual
    machine in your local infrastructure that you’d like to copy to your AWS account and run
    as an EC2 instance. Which of the following will be necessary steps? (Choose two.)

    1. Import the virtual machine to your AWS region using a secure SSH tunnel.

    2. Import the virtual machine using VM Import/Export.

    3. Select the imported VM from among your private AMIs and launch an instance.

    4. Select the imported VM from the AWS Marketplace AMIs and launch an instance.

    5. Use the AWS CLI to securely copy your virtual machine image to an S3 bucket within
      the AWS region you’ll be using.

  3. Your AWS CLI command to launch an AMI as an EC2 instance has failed, giving you an
    error message that includes InvalidAMIID.NotFound. Which of the following is the most
    likely cause?

    1. You haven’t properly configured the ~/.aws/config file.

    2. The AMI is being updated and is temporarily unavailable.

    3. Your key pair file has been given the wrong (overly permissive) permissions.

    4. The AMI you specified exists in a different region than the one you’ve currently specified.

  4. The sensitivity of the data your company works with means that the instances you run must
    be secured through complete physical isolation. What should you specify as you configure a
    new instance?

    1. Dedicated Host tenancy

    2. Shared tenancy

    3. Dedicated Instance tenancy

    4. Isolated tenancy



  5. Normally, two instances running m5.large instance types can handle the traffic accessing
    your online e-commerce site, but you know that you will face short, unpredictable periods
    of high demand. Which of the following choices should you implement? (Choose two.)

    1. Configure autoscaling.

    2. Configure load balancing.

    3. Purchase two m5.large instances on the spot market and as many on-demand instances
      as necessary.

    4. Shut down your m5.large instances and purchase instances using a more robust instance
      type to replace them.

    5. Purchase two m5.large reserved instances and as many on-demand instances as
      necessary.

  6. Which of the following use cases would be most cost effective if run using spot market
    instances?

    1. Your e-commerce website is built using a publicly available AMI.

    2. You provide high-end video rendering services using a fault-tolerant process that can
      easily manage a job that was unexpectedly interrupted.

    3. You’re running a backend database that must be reliably updated to keep track of critical
      transactions.

    4. Your deployment runs as a static website on S3.

  7. In the course of a routine infrastructure audit, your organization discovers that some of
    your running EC2 instances are not configured properly and must be updated. Which of
    the following configuration details cannot be changed on an existing EC2 instance?

    1. AMI

    2. Instance type

    3. Security group

    4. Public IP address

  8. For an account with multiple resources running as part of multiple projects, which of the
    following key/value combination examples would make for the most effective identification
    convention for resource tags?

    1. servers:server1

    2. project1:server1

    3. EC2:project1:server1

    4. server1:project1

  9. Which of the following EBS options will you need to keep your data-hungry application
    that requires up to 20,000 IOPS happy?

    1. Cold HDD

    2. General-purpose SSD

    3. Throughput-optimized HDD

    4. Provisioned-IOPS SSD



  10. Your organization needs to introduce Auto Scaling to its infrastructure and needs to generate
    a “golden image” AMI from an existing EBS volume. This image will need to be
    shared among multiple AWS accounts belonging to your organization. Which of the following
    steps will get you there? (Choose three.)

    1. Create an image from a detached EBS volume, use it to create a snapshot, select your
      new AMI from your private collection, and use it for your launch configuration.

    2. Create a snapshot of the EBS root volume you need, use it to create an image, select
      your new AMI from your private collection, and use it for your launch configuration.

    3. Create an image from the EBS volume attached to the instance, select your new AMI
      from your private collection, and use it for your launch configuration.

    4. Search the AWS Marketplace for the appropriate image and use it for your launch configuration.

    5. Import the snapshot of an EBS root volume from a different AWS account, use it to
      create an image, select your new AMI from your private collection, and use it for your
      launch configuration.

  11. Which of the following are benefits of instance store volumes? (Choose two.)

    1. Instance volumes are physically attached to the server that’s hosting your instance,
      allowing faster data access.

    2. Instance volumes can be used to store data even after the instance is shut down.

    3. The use of instance volumes does not incur costs (beyond those for the instance itself).

    4. You can set termination protection so that an instance volume can’t be accidentally
      shut down.

    5. Instance volumes are commonly used as a base for the creation of AMIs.

  12. According to default behavior (and AWS recommendations), which of the following IP
    addresses could be assigned as the private IP for an EC2 instance? (Choose two.)

    1. 54.61.211.98

    2. 23.176.92.3

    3. 172.17.23.43

    4. 10.0.32.176

    5. 192.140.2.118

  13. You need to restrict access to your EC2 instance-based application to only certain clients
    and only certain targets. Which three attributes of an incoming data packet are used by a
    security group to determine whether it should be allowed through? (Choose three.)

    1. Network port

    2. Source address

    3. Datagram header size

    4. Network protocol

    5. Destination address



  14. How are IAM roles commonly used to ensure secure resource access in relation to EC2
    instances? (Choose two.)

    1. A role can assign processes running on the EC2 instance itself permission to access
      other AWS resources.

    2. A user can be given permission to authenticate as a role and access all associated
      resources.

    3. A role can be associated with individual instance-based processes (Linux instances
      only), giving them permission to access other AWS resources.

    4. A role can give users and resources permission to access the EC2 instance.

  15. You have an instance running within a private subnet that needs external network access
    to receive software updates and patches. Which of the following can securely provide that
    access from a public subnet within the same VPC? (Choose two.)

    1. Internet gateway

    2. NAT instance

    3. Virtual private gateway

    4. NAT gateway

    5. VPN

  16. What do you have to do to securely authenticate to the GUI console of a Windows
    EC2 session?

    1. Use the private key of your key pair to initiate an SSH tunnel session.

    2. Use the public key of your key pair to initiate an SSH tunnel session.

    3. Use the public key of your key pair to retrieve the password you’ll use to log in.

    4. Use the private key of your key pair to retrieve the password you’ll use to log in.

  17. Your application deployment includes multiple EC2 instances that need low-latency connections
    to each other. Which of the following AWS tools will allow you to locate EC2
    instances closer to each other to reduce network latency?

    1. Load balancing

    2. Placement groups

    3. AWS Systems Manager

    4. AWS Fargate

  18. To save configuration time and money, you want your application to run only when network
    events trigger it but shut down immediately after. Which of the following will do
    that for you?

    1. AWS Lambda

    2. AWS Elastic Beanstalk

    3. Amazon Elastic Container Service (ECS)

    4. Auto Scaling



  19. Which of the following will allow you to quickly copy a virtual machine image from your
    local infrastructure to your AWS VPC?

    1. AWS Simple Storage Service (S3)

    2. AWS Snowball

    3. VM Import/Export

    4. AWS Direct Connect

  20. You’ve configured an EC2 Auto Scaling group to use a launch configuration to
    provision and install an application on several instances. You now need to reconfigure
    Auto Scaling to install an additional application on new instances. Which of the
    following should you do?

    1. Modify the launch configuration.

    2. Create a launch template and configure the Auto Scaling group to use it.

    3. Modify the launch template.

    4. Modify the CloudFormation template.

  21. You create an Auto Scaling group with a minimum group size of 3, a maximum group size
    of 10, and a desired capacity of 5. You then manually terminate two instances in the group.
    Which of the following will Auto Scaling do?

    1. Create two new instances

    2. Reduce the desired capacity to 3

    3. Nothing

    4. Increment the minimum group size to 5

  22. You’re running an application that receives a spike in traffic on the first day of every
    month. You want to configure Auto Scaling to add more instances before the spike begins
    and then add additional instances in proportion to the CPU utilization of each instance.
    Which of the following should you implement? (Choose all that apply.)

    1. Target tracking policies

    2. Scheduled actions

    3. Step scaling policies

    4. Simple scaling policies

    5. Load balancing

  23. As part of your new data backup protocols, you need to manually take EBS snapshots
    of several hundred volumes. Which type of Systems Manager document enables you
    to do this?

    1. Command

    2. Automation

    3. Policy

    4. Manual

Chapter 3: AWS Storage


Review Questions

  1. Your organization runs Linux-based EC2 instances that all require low-latency read/write
    access to a single set of files. Which of the following AWS services are your best choices?
    (Choose two.)

    1. AWS Storage Gateway

    2. AWS S3

    3. Amazon Elastic File System

    4. AWS Elastic Block Store

  2. Your organization expects to be storing and processing large volumes of data in many small
    increments. When considering S3 usability, you’ll need to know whether you’ll face any
    practical limitations in the use of AWS account resources. Which of the following will
    normally be available only in limited amounts?

    1. PUT requests/month against an S3 bucket

    2. The volume of data space available per S3 bucket

    3. Account-wide S3 storage space

    4. The number of S3 buckets within a single account

  3. You have a publicly available file called filename stored in an S3 bucket named bucketname.
    Which of the following addresses will successfully retrieve the file using a web browser?

    1. s3.amazonaws.com/bucketname/filename

    2. filename/bucketname.s3.amazonaws.com

    3. s3://bucketname/filename

    4. s3://filename/bucketname
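
    For reference, a public S3 object is reachable over HTTPS in either path style or
    virtual-hosted style. A minimal sketch, using the hypothetical bucket and object names
    from the question:

    ```python
    # Build the two HTTPS URL styles S3 supports for a public object.
    bucket, key = "bucketname", "filename"  # hypothetical names from the question

    path_style = f"https://s3.amazonaws.com/{bucket}/{key}"
    virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{key}"

    print(path_style)      # https://s3.amazonaws.com/bucketname/filename
    print(virtual_hosted)  # https://bucketname.s3.amazonaws.com/filename
    ```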

  4. If you want the files stored in an S3 bucket to be accessible using a familiar directory
    hierarchy system, you’ll need to specify prefixes and delimiters. What are prefixes and
    delimiters?

    1. A prefix is the name common to the objects you want to group, and a delimiter is the
      bar character (|).

    2. A prefix is the DNS name that precedes the amazonaws.com domain, and a delimiter
      is the name you want to give your file directory.

    3. A prefix is the name common to the objects you want to group, and a delimiter is a
      forward slash character (/).

    4. A prefix is the name common to the file type you want to identify, and a delimiter is a
      forward slash character (/).

  5. Your web application relies on data objects stored in AWS S3 buckets. Compliance with
    industry regulations requires that those objects are encrypted and that related events can be
    closely tracked. Which combination of tools should you use? (Choose two.)

    1. Server-side encryption

    2. Amazon S3-Managed Keys

    3. AWS KMS-Managed Keys

    4. Client-side encryption

    5. AWS End-to-End managed keys

  6. You are engaged in a deep audit of the use of your AWS resources and you need to better
    understand the structure and content of your S3 server access logs. Which of the following
    operational details are likely to be included in S3 server access logs? (Choose three.)

    1. Source bucket name

    2. Action requested

    3. Current bucket size

    4. API bucket creation calls

    5. Response status

  7. You’re assessing the level of durability you’ll need to sufficiently ensure the long-term
    viability of a new web application you’re planning. Which of the following risks are
    covered by S3’s data durability guarantees? (Choose two.)

    1. User misconfiguration

    2. Account security breach

    3. Infrastructure failure

    4. Temporary service outages

    5. Data center security breach

  8. Which of the following explains the difference in durability between S3’s One Zone-IA and
    Reduced Redundancy classes?

    1. One Zone-IA data is heavily replicated but only within a single availability zone,
      whereas Reduced Redundancy data is only lightly replicated.

    2. Reduced Redundancy data is heavily replicated but only within a single availability
      zone, whereas One Zone-IA data is only lightly replicated.

    3. One Zone-IA data is replicated across AWS regions, whereas Reduced Redundancy
      data is restricted to a single region.

    4. One Zone-IA data is automatically backed up to Amazon Glacier, whereas Reduced
      Redundancy data remains within S3.

  9. Which of the following is the 12-month availability guarantee for the S3 Standard-IA class?

    1. 99.99 percent

    2. 99.9 percent

    3. 99.999999999 percent

    4. 99.5 percent

  10. Your application regularly writes data to an S3 bucket, but you’re worried about the
    potential for data corruption as a result of conflicting concurrent operations. Which of
    the following data operations would not be subject to concerns about eventual consistency?

    1. Operations immediately preceding the deletion of an existing object

    2. Operations subsequent to the updating of an existing object

    3. Operations subsequent to the deletion of an existing object

    4. Operations subsequent to the creation of a new object

  11. You’re worried that updates to the important data you store in S3 might incorrectly
    overwrite existing files. What must you do to protect objects in S3 buckets from being
    accidentally lost?

    1. Nothing. S3 protects existing files by default.

    2. Nothing. S3 saves older versions of your files by default.

    3. Enable versioning.

    4. Enable file overwrite protection.

  12. Your S3 buckets contain many thousands of objects. Some of them could be moved to less
    expensive storage classes and others still require instant availability. How can you apply
    transitions between storage classes for only certain objects within an S3 bucket?

    1. By specifying particular prefixes when you define your lifecycle rules

    2. This isn’t possible. Lifecycle rules must apply to all the objects in a bucket.

    3. By specifying particular prefixes when you create the bucket

    4. By importing a predefined lifecycle rule template

  13. Which of the following classes will usually make the most sense for long-term storage when
    included within a sequence of lifecycle rules?

    1. Glacier

    2. Reduced Redundancy

    3. S3 One Zone-IA

    4. S3 Standard-IA

  14. Which of the following are the recommended methods for providing secure and controlled
    access to your buckets? (Choose two.)

    1. S3 access control lists (ACLs)

    2. S3 bucket policies

    3. IAM policies

    4. Security groups

    5. AWS Key Management Service

  15. In the context of an S3 bucket policy, which of the following statements describes a
    principal?

    1. The AWS service being defined (S3 in this case)

    2. An origin resource that’s given permission to alter an S3 bucket

    3. The resource whose access is being defined

    4. The user or entity to which access is assigned

  16. You don’t want to open up the contents of an S3 bucket to anyone on the Internet, but you
    need to share the data with specific clients. Generating and then sending them a presigned
    URL is a perfect solution. Assuming you didn’t explicitly set a value, how long will the
    presigned URL remain valid?

    1. 24 hours

    2. 3,600 seconds

    3. 5 minutes

    4. 360 seconds

  17. Which non-S3 AWS resources can improve the security and user experience of your
    S3-hosted static website? (Choose two.)

    1. AWS Certificate Manager

    2. Elastic Compute Cloud (EC2)

    3. Relational Database Service (RDS)

    4. Route 53

    5. AWS Key Management Service

  18. What is the largest single archive supported by Amazon Glacier?

    1. 5 GB

    2. 40 TB

    3. 5 TB

    4. 40 GB

  19. You need a quick way to transfer very large (peta-scale) data archives to the cloud.
    Assuming your Internet connection isn’t up to the task, which of the following will be both
    (relatively) fast and cost-effective?

    1. Direct Connect

    2. Server Migration Service

    3. Snowball

    4. Storage Gateway

  20. Your organization runs Windows-based EC2 instances that all require low-latency
    read/write access to a single set of files. Which of the following AWS services is your
    best choice?

    1. Amazon FSx for Windows File Server

    2. Amazon FSx for Lustre

    3. Amazon Elastic File System

    4. Amazon Elastic Block Store

Chapter 4: Amazon Virtual Private Cloud


Review Questions

  1. What is the range of allowed IPv4 prefix lengths for a VPC CIDR block?

    A. /16 to /28

    B. /16 to /56

    C. /8 to /30

    D. /56 only

  2. You’ve created a VPC with the CIDR 192.168.16.0/24. You want to assign a secondary
    CIDR to this VPC. Which CIDR can you use?

    A. 172.31.0.0/16

    B. 192.168.0.0/16

    C. 192.168.0.0/24

    D. 192.168.16.0/23

  3. You need to create two subnets in a VPC that has a CIDR of 10.0.0.0/16. Which of the
    following CIDRs can you assign to one of the subnets while leaving room for an additional
    subnet? (Choose all that apply.)

    A. 10.0.0.0/24

    B. 10.0.0.0/8

    C. 10.0.0.0/16

    D. 10.0.0.0/23
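
    Python’s standard ipaddress module can sanity-check the CIDR math here. The sketch
    below flags candidates that fit inside the VPC block while leaving address space for a
    second subnet (a rough check only; it ignores the five addresses AWS reserves in each
    subnet):

    ```python
    import ipaddress

    vpc = ipaddress.ip_network("10.0.0.0/16")

    def leaves_room(cidr):
        """True if cidr fits inside the VPC without consuming the whole block."""
        subnet = ipaddress.ip_network(cidr)
        return subnet.subnet_of(vpc) and subnet.num_addresses < vpc.num_addresses

    for cidr in ["10.0.0.0/24", "10.0.0.0/8", "10.0.0.0/16", "10.0.0.0/23"]:
        print(cidr, leaves_room(cidr))
    ```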

  4. What is the relationship between a subnet and an availability zone?

    1. A subnet can exist in multiple availability zones.

    2. An availability zone can have multiple subnets.

    3. An availability zone can have only one subnet.

    4. A subnet’s CIDR is derived from its availability zone.

  5. Which is true regarding an elastic network interface?

    1. It must have a private IP address from the subnet that it resides in.

    2. It cannot exist independently of an instance.

    3. It can be connected to multiple subnets.

    4. It can have multiple IP addresses from different subnets.

  6. Which of the following statements is true of security groups?

    1. Only one security group can be attached to an ENI.

    2. A security group must always be attached to an ENI.

    3. A security group can be attached to a subnet.

    4. Every VPC contains a default security group.

  7. How does an NACL differ from a security group?

    1. An NACL is stateless.

    2. An NACL is stateful.

    3. An NACL is attached to an ENI.

    4. An NACL can be associated with only one subnet.

  8. What is an Internet gateway?

    1. A resource that grants instances in multiple VPCs’ Internet access

    2. An implied router

    3. A physical router

    4. A VPC resource with no management IP address

  9. What is the destination for a default IPv4 route?

    1. 0.0.0.0/0

    2. ::0/0

    3. An Internet gateway

    4. The IP address of the implied router

  10. You create a new route table in a VPC but perform no other configuration on it. You then
    create a new subnet in the same VPC. Which route table will your new subnet be
    associated with?

    1. The main route table

    2. The route table you created

    3. The default route table

    4. None of these

  11. You create a Linux instance and have AWS automatically assign a private IP address but not
    a public IP address. What will happen when you stop and restart the instance?

    1. You won’t be able to establish an SSH session directly to the instance from the Internet.

    2. The instance won’t be able to access the Internet.

    3. The instance will receive the same private IP address.

    4. The instance will be unable to reach other instances in its subnet.

  12. How can you assign a public IP address to a running instance that doesn’t have one?

    1. Allocate an ENI and associate it with the instance’s primary EIP.

    2. Allocate an EIP and associate it with the instance’s primary ENI.

    3. Configure the instance to use an automatically assigned public IP.

    4. Allocate an EIP and change the private IP address of the instance’s ENI to match.

  13. When an instance with an automatically assigned public IP sends a packet to another
    instance’s EIP, what source address does the destination instance see?

    1. The public IP

    2. The EIP

    3. The private IP

    4. 0.0.0.0

  14. Why must a NAT device reside in a different subnet than an instance that uses it?

    1. Both must use different default gateways.

    2. Both must use different NACLs.

    3. Both must use different security groups.

    4. The NAT device requires a public interface and a private interface.

  15. Which of the following is a difference between a NAT instance and NAT gateway?

    1. There are different NAT gateway types.

    2. A NAT instance scales automatically.

    3. A NAT gateway can span multiple availability zones.

    4. A NAT gateway scales automatically.

  16. Which VPC resource performs network address translation?

    1. Internet gateway

    2. Route table

    3. EIP

    4. ENI

  17. What must you do to configure a NAT instance after creating it?

    1. Disable the source/destination check on its ENI.

    2. Enable the source/destination check on its ENI.

    3. Create a default route in its route table with a NAT gateway as the target.

    4. Assign a primary private IP address to the instance.

  18. Which of the following is true regarding VPC peering?

    1. Transitive routing is not supported.

    2. A VPC peering connection requires a public IP address.

    3. You can peer up to three VPCs using a single peering connection.

    4. You can use a peering connection to share an Internet gateway among multiple VPCs.

  19. You’ve created one VPC peering connection between two VPCs. What must you do to
    use this connection for bidirectional instance-to-instance communication? (Choose all
    that apply.)

    1. Create two routes with the peering connection as the target.

    2. Create only one default route with the peering connection as the target.

    3. Create another peering connection between the VPCs.

    4. Configure the instances’ security groups correctly.

  20. Which of the following is not a limitation of interregion VPC peering?

    1. It’s not supported in some regions.

    2. The maximum MTU is 1,500 bytes.

    3. You can’t use IPv4.

    4. You can’t use IPv6.

  21. Which of the following connection types is always encrypted?

    1. Direct Connect

    2. VPN

    3. VPC peering

    4. Transit gateway

  22. Which of the following allows EC2 instances in different regions to communicate using
    private IP addresses? (Choose three.)

    1. VPN

    2. Direct Connect

    3. VPC peering

    4. Transit gateway

  23. Which of the following is true of a route in a transit gateway route table?

    1. It can be multicast.

    2. It can be a blackhole route.

    3. It can have an Internet gateway as a target.

    4. It can have an ENI as a target.

  24. Which of the following is an example of a tightly coupled HPC workload?

    1. Image processing

    2. Audio processing

    3. DNA sequencing

    4. Hurricane track forecasting

    5. Video processing

Chapter 5: Database Services


Review Questions

  1. In a relational database, a row may also be called what? (Choose two.)

    1. Record

    2. Attribute

    3. Tuple

    4. Table

  2. What must every relational database table contain?

    1. A foreign key

    2. A primary key

    3. An attribute

    4. A row

  3. Which SQL statement would you use to retrieve data from a relational database table?

    1. QUERY

    2. SCAN

    3. INSERT

    4. SELECT

  4. Which relational database type is optimized to handle multiple transactions per second?

    1. Offline transaction processing (OLTP)

    2. Online transaction processing (OLTP)

    3. Online analytic processing (OLAP)

    4. Key/value store

  5. How many database engines can an RDS database instance run?

    1. Six

    2. One

    3. Two

    4. Four

  6. Which database engines are compatible with existing MySQL databases? (Choose all
    that apply.)

    1. Microsoft SQL Server

    2. MariaDB

    3. Aurora

    4. PostgreSQL

  7. Which storage engine should you use with MySQL, Aurora, and MariaDB for maximum
    compatibility with RDS?

    1. MyISAM

    2. XtraDB

    3. InnoDB

    4. PostgreSQL

  8. Which database engines support the bring-your-own-license (BYOL) model? (Choose all
    that apply.)

    1. Oracle Standard Edition Two

    2. Microsoft SQL Server

    3. Oracle Standard Edition One

    4. PostgreSQL

  9. Which database instance class provides dedicated bandwidth for storage volumes?

    1. Standard

    2. Memory optimized

    3. Storage optimized

    4. Burstable performance

  10. If a MariaDB database running in RDS needs to write 200 MB of data every second, how
    many IOPS should you provision using io1 storage to sustain this performance?

    A. 12,800

    B. 25,600

    C. 200

    D. 16
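
    The arithmetic behind this kind of question: RDS counts each I/O in page-sized chunks
    (16 KB pages for the MySQL-family engines, including MariaDB), so a sustained write
    rate converts to IOPS by dividing by the page size. A sketch of that calculation:

    ```python
    def iops_for_throughput(mb_per_sec, page_kb=16):
        """IOPS needed to sustain mb_per_sec of writes at page_kb per I/O."""
        return (mb_per_sec * 1024) // page_kb

    print(iops_for_throughput(200))  # 12800
    ```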

  11. Using general-purpose SSD storage, how much storage would you need to allocate to
    get 600 IOPS?

    1. 200 GB

    2. 100 GB

    3. 200 TB

    4. 200 MB

  12. If you need to achieve 12,000 IOPS using provisioned IOPS SSD storage, how much storage
    should you allocate, assuming that you need only 100 GB of storage?

    1. There is no minimum storage requirement.

    2. 200 GB

    3. 240 GB

    4. 12 TB
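
    The storage-to-IOPS ratios these two questions turn on can be checked the same way:
    gp2 provides a baseline of 3 IOPS per GB allocated, and io1 caps provisioned IOPS at
    50 times the allocated GB, so a target IOPS figure implies a minimum allocation. A
    sketch of that arithmetic:

    ```python
    import math

    def gp2_gb_for_iops(iops, iops_per_gb=3):
        """Minimum gp2 allocation (GB) for a target baseline IOPS."""
        return math.ceil(iops / iops_per_gb)

    def io1_min_gb_for_iops(iops, max_ratio=50):
        """Minimum io1 allocation (GB) under the 50:1 IOPS-to-GB ceiling."""
        return math.ceil(iops / max_ratio)

    print(gp2_gb_for_iops(600))        # 200
    print(io1_min_gb_for_iops(12000))  # 240
    ```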

  13. What type of database instance only accepts queries?

    1. Read replica

    2. Standby database instance

    3. Primary database instance

    4. Master database instance

  14. In a multi-AZ deployment using Oracle, how is data replicated?

    1. Synchronously from the primary instance to a read replica

    2. Synchronously using a cluster volume

    3. Asynchronously from the primary to a standby instance

    4. Synchronously from the primary to a standby instance

  15. Which of the following occurs when you restore a failed database instance from a snapshot?

    1. RDS restores the snapshot to a new instance.

    2. RDS restores the snapshot to the failed instance.

    3. RDS restores only the individual databases to a new instance.

    4. RDS deletes the snapshot.

  16. Which Redshift distribution style stores all tables on all compute nodes?

    1. EVEN

    2. ALL

    3. KEY

    4. ODD

  17. Which Redshift node type can store up to 326 TB of data?

    1. Dense memory

    2. Leader

    3. Dense storage

    4. Dense compute

  18. Which is true regarding a primary key in a nonrelational database? (Choose all that apply.)

    1. It’s required to uniquely identify an item.

    2. It must be unique within the table.

    3. It’s used to correlate data across different tables.

    4. Its data type can vary within a table.

  19. In a DynamoDB table containing orders, which key would be most appropriate for storing
    an order date?

    1. Partition key

    2. Sort key

    3. Hash key

    4. Simple primary key

  20. When creating a DynamoDB table, how many read capacity units should you provision to
    be able to sustain strongly consistent reads of 11 KB per second?

    1. 3

    2. 2

    3. 1

    4. 0
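
    For the capacity-unit arithmetic: one read capacity unit covers one strongly consistent
    read per second of up to 4 KB, and larger reads consume additional units, rounded up.
    A sketch:

    ```python
    import math

    def rcus_strongly_consistent(kb_per_sec, unit_kb=4):
        """RCUs needed for strongly consistent reads of kb_per_sec per second."""
        return math.ceil(kb_per_sec / unit_kb)

    print(rcus_strongly_consistent(11))  # 3
    ```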

  21. Which Redshift node type can provide the fastest read access?

    1. Dense compute

    2. Dense storage

    3. Leader

    4. KEY

  22. Which DynamoDB index type allows the partition and sort keys to differ from those of
    the base table?

    1. Eventually consistent index

    2. Local secondary index

    3. Global primary index

    4. Global secondary index

  23. To ensure the best performance, in which of the following situations would you choose to
    store data in a NoSQL database instead of a relational database?

    1. You need to perform a variety of complex queries against the data.

    2. You need to query data based on only one attribute.

    3. You need to store JSON documents.

    4. The data will be used by different applications.

  24. What type of database can discover how different items are related to each other?

    1. SQL

    2. Relational

    3. Document-oriented store

    4. Graph

Chapter 6: Authentication and Authorization—AWS Identity and Access Management


Review Questions

  1. Which of the following is the greatest risk posed by using your AWS account root user for
    day-to-day operations?

    1. There would be no easy way to control resource usage by project or class.

    2. There would be no effective limits on the effect of an action, making it more likely for
      unintended and unwanted consequences to result.

    3. Since root has full permissions over your account resources, an account compromise at
      the hands of hackers would be catastrophic.

    4. It would make it difficult to track which account user is responsible for specific actions.

  2. You’re trying to create a custom IAM policy to more closely manage access to components
    in your application stack. Which of the following syntax-related statements is a correct
    description of IAM policies?

    1. The Action element refers to the way IAM will react to a request.

    2. The * character applies an element globally—as broadly as possible.

    3. The Resource element refers to the third-party identities that will be allowed to access
      the account.

    4. The Effect element refers to the anticipated resource state after a request is granted.

  3. Which of the following will—when executed on its own—prevent an IAM user with no
    existing policies from launching an EC2 instance? (Choose three.)

    1. Attach no policies to the user.

    2. Attach two policies to the user, with one policy permitting full EC2 access and the
      other permitting IAM password changes but denying EC2 access.

    3. Attach a single policy permitting the user to create S3 buckets.

    4. Attach the AdministratorAccess policy.

    5. Associate an IAM action statement blocking all EC2 access to the user’s account.

  4. Which of the following are important steps for securing IAM user accounts? (Choose two.)

    1. Never use the account to perform any administration operations.

    2. Enable multifactor authentication (MFA).

    3. Assign a long and complex password.

    4. Delete all access keys.

    5. Insist that your users access AWS resources exclusively through the AWS CLI.

  5. To reduce your exposure to possible attacks, you’re auditing the active access keys
    associated with your account. Which of the following AWS CLI commands can tell you
    whether a specified access key is still being used?

    1. aws iam get-access-key-used --access-key-id <key_ID>

    2. aws iam --get-access-key-last-used access-key-id <key_ID>

    3. aws iam get-access-key-last-used access-last-key-id <key_ID>

    4. aws iam get-access-key-last-used --access-key-id <key_ID>

  6. You’re looking to reduce the complexity and tedium of AWS account administration. Which
    of the following is the greatest benefit of organizing your users into groups?

    1. It enhances security by consolidating resources.

    2. It simplifies the management of user permissions.

    3. It allows for quicker response times to service interruptions.

    4. It simplifies locking down the root user.

  7. During an audit of your authentication processes, you enumerate a number of identity types
    and want to know which of them might fit the category of “trusted identity” and require
    deeper investigation. Which of these is not considered a trusted entity in the context of
    IAM roles?

    1. A web identity authenticating with Google

    2. An identity coming through a SAML-based federated provider

    3. An identity using an X.509 certificate

    4. A web identity authenticating with Amazon Cognito

  8. Your company is bidding for a contract with a U.S. government agency that demands any
    cryptography modules used on the project be compliant with government standards. Which
    of the following AWS services provides virtual hardware devices for managing encryption
    infrastructure that’s FIPS 140-2 compliant?

    1. AWS CloudHSM

    2. AWS Key Management Service

    3. AWS Security Token Service

    4. AWS Secrets Manager

  9. Which of the following is the best tool for authenticating access to a VPC-based Microsoft
    SharePoint farm?

    1. Amazon Cognito

    2. AWS Directory Service for Microsoft Active Directory

    3. AWS Secrets Manager

    4. AWS Key Management Service

  10. What is the function of Amazon Cognito identity pools?

    1. Gives your application users temporary, controlled access to other services in your
      AWS account

    2. Adds user sign-up and sign-in to your applications

    3. Incorporates encryption infrastructure into your application lifecycle

    4. Delivers up-to-date credentials to authenticate RDS database requests

  11. An employee with access to the root user on your AWS account has just left your company.
    Since you can’t be 100 percent sure that the former employee won’t try to harm your
    company, which of the following steps should you take? (Choose three.)

    1. Change the password and MFA settings for the root account.

    2. Delete and re-create all existing IAM policies.

    3. Change the passwords for all your IAM users.

    4. Delete the former employee’s own IAM user (within the company account).

    5. Immediately rotate all account access keys.

  12. You need to create a custom IAM policy to give one of your developers limited access to
    your DynamoDB resources. Which of the following elements will not play any role in
    crafting an IAM policy?

    1. Action

    2. Region

    3. Effect

    4. Resource

  13. Which of the following are necessary steps for creating an IAM role? (Choose two.)

    1. Define the action.

    2. Select at least one policy.

    3. Define a trusted entity.

    4. Define the consumer application.

  14. Which of the following uses authentication based on AWS Security Token Service
    (STS) tokens?

    1. Policies

    2. Users

    3. Groups

    4. Roles

  15. What format must be used to write an IAM policy?

    1. HTML

    2. Key/value pairs

    3. JSON

    4. XML

  16. If you need to allow a user full control over EC2 instance resources, which two of the
    following must be included in the policy you create?

    1. "Target": "ec2:*"

    2. "Action": "ec2:*"

    3. "Resource": "ec2:*"

    4. "Effect": "Allow"

    5. "Effect": "Permit"
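
    IAM policies are JSON documents whose statements combine Effect, Action, and Resource
    elements. The sketch below builds a minimal policy granting full EC2 control, assembled
    from Python only so its structure can be checked programmatically:

    ```python
    import json

    # Minimal policy sketch: a single statement allowing every EC2 action.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ec2:*",
                "Resource": "*",
            }
        ],
    }

    print(json.dumps(policy, indent=2))
    ```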

  17. What is the function of Amazon Cognito user pools?

    1. Gives your application users temporary, controlled access to other services in your
      AWS account

    2. Adds user sign-up and sign-in to your applications

    3. Incorporates encryption infrastructure into your application lifecycle

    4. Delivers up-to-date credentials to authenticate RDS database requests

  18. Which of the following best describe the “managed” part of AWS Managed Microsoft AD?
    (Choose two.)

    1. Integration with on-premises AD domains is possible.

    2. AD domain controllers are launched in two availability zones.

    3. Data is automatically replicated.

    4. Underlying AD software is automatically updated.

  19. Which of the following steps are part of the access key rotation process? (Choose three.)

    1. Monitor the use of your new keys.

    2. Monitor the use of old keys.

    3. Deactivate the old keys.

    4. Delete the old keys.

    5. Confirm the status of your X.509 certificate.

  20. What tool will allow an Elastic Container Service task to access container images it might
    need that are being maintained in your account’s Elastic Container Registry?

    1. An IAM role

    2. An IAM policy

    3. An IAM group

    4. An IAM access key

Chapter 7: CloudTrail, CloudWatch, and AWS Config


Review Questions

  1. You’ve configured CloudTrail to log all management events in all regions. Which of the
    following API events will CloudTrail log? (Choose all that apply.)

    1. Logging into the AWS console

    2. Creating an S3 bucket from the web console

    3. Uploading an object to an S3 bucket

    4. Creating a subnet using the AWS CLI

  2. You’ve configured CloudTrail to log all read-only data events. Which of the following
    events will CloudTrail log?

    1. Viewing all S3 buckets

    2. Uploading a file to an S3 bucket

    3. Downloading a file from an S3 bucket

    4. Creating a Lambda function

  3. Sixty days ago, you created a trail in CloudTrail to log read-only management events.
    Subsequently, someone deleted the trail. Where can you look to find out who deleted it?
    No other trails are configured.

    1. The IAM user log

    2. The trail logs stored in S3

    3. The CloudTrail event history in the region where the trail was configured

    4. The CloudTrail event history in any region

  4. What uniquely distinguishes two CloudWatch metrics that have the same name and are in
    the same namespace?

    1. The region

    2. The dimension

    3. The timestamp

    4. The data point

  5. Which type of monitoring sends metrics to CloudWatch every five minutes?

    1. Regular

    2. Detailed

    3. Basic

    4. High resolution

  6. You update a custom CloudWatch metric with the timestamp of 15:57:08 and a value of 3.
    You then update the same metric with the timestamp of 15:57:37 and a value of 6. Assuming
    the metric is a high-resolution metric, which of the following will CloudWatch do?

    1. Record both values with the given timestamp.

    2. Record the second value with the timestamp 15:57:37, overwriting the first value.

    3. Record only the first value with the timestamp 15:57:08, ignoring the second value.

    4. Record only the second value with the timestamp 15:57:00, overwriting the first value.

  7. How long does CloudWatch retain metrics stored at one-hour resolution?

    1. 15 days

    2. 3 hours

    3. 63 days

    4. 15 months

  8. You want to use CloudWatch to graph the exact data points of a metric for the last hour.
    The metric is stored at five-minute resolution. Which statistic and period should you use?

    1. The Sum statistic with a five-minute period

    2. The Average statistic with a one-hour period

    3. The Sum statistic with a one-hour period

    4. The Sample count statistic with a five-minute period

  9. Which CloudWatch resource type stores log events?

    1. Log group

    2. Log stream

    3. Metric filter

    4. CloudWatch Agent

  10. The CloudWatch Agent on an instance has been sending application logs to a CloudWatch
    log stream for several months. How can you remove old log events without disrupting
    delivery of new log events? (Choose all that apply.)

    1. Delete the log stream.

    2. Manually delete old log events.

    3. Set the retention of the log stream to 30 days.

    4. Set the retention of the log group to 30 days.

  11. You created a trail to log all management events in all regions and send the trail logs to
    CloudWatch logs. You notice that some recent management events are missing from the
    log stream, but others are there. What are some possible reasons for this? (Choose all
    that apply.)

    1. The missing events are greater than 256 KB in size.

    2. The metric filter is misconfigured.

    3. There’s a delay between the time the event occurs and the time CloudTrail streams the
      event to CloudWatch.

    4. The IAM role that CloudTrail assumes is misconfigured.

  12. Two days ago, you created a CloudWatch alarm to monitor the VolumeReadOps on an
    EBS volume. Since then, the alarm has remained in an INSUFFICIENT_DATA state.
    What are some possible reasons for this? (Choose all that apply.)

    1. The data points to monitor haven’t crossed the specified threshold.

    2. The EBS volume isn’t attached to a running instance.

    3. The evaluation period hasn’t elapsed.

    4. The alarm hasn’t collected enough data points to alarm.

  13. You want a CloudWatch alarm to change state when four consecutive evaluation periods
    elapse with no data. How should you configure the alarm to treat missing data?

    1. As Missing

    2. Breaching

    3. Not Breaching

    4. Ignore

    5. As Not Missing

  14. You’ve configured an alarm to monitor a metric in the AWS/EC2 namespace. You want
    CloudWatch to send you a text message and reboot an instance when an alarm is
    breaching. Which two actions should you configure in the alarm? (Choose two.)

    1. SMS action

    2. Auto Scaling action

    3. Notification action

    4. EC2 action

  15. In a CloudWatch alarm, what does the EC2 recover action do to the monitored instance?

    1. Migrates the instance to a different host

    2. Reboots the instance

    3. Deletes the instance and creates a new one

    4. Restores the instance from a snapshot

  16. You learn that an instance in the us-west-1 region was deleted at some point in the past. To
    find out who deleted the instance and when, which of the following must be true?

    1. The AWS Config configuration recorder must have been turned on in the region at the
      time the instance was deleted.

    2. CloudTrail must have been logging write-only management events for all regions.

    3. CloudTrail must have been logging IAM events.

    4. The CloudWatch log stream containing the deletion event must not have been deleted.

  17. Which of the following may be included in an AWS Config delivery channel? (Choose all
    that apply.)

    1. A CloudWatch log stream

    2. The delivery frequency of the configuration snapshot

    3. An S3 bucket name

    4. An SNS topic ARN

  18. You configured AWS Config to monitor all your resources in the us-east-1 region. After
    making several changes to the AWS resources in this region, you decided you want to delete
    the old configuration items. How can you accomplish this?

    1. Pause the configuration recorder.

    2. Delete the configuration recorder.

    3. Delete the configuration snapshots.

    4. Set the retention period to 30 days and wait for the configuration items to age out.

  19. Which of the following metric math expressions can CloudWatch graph? (Choose all
    that apply.)

    1. AVG(m1)-m1

    2. AVG(m1)

    3. METRICS()/AVG(m1)

    4. m1/m2

  20. You’ve configured an AWS Config rule to check whether CloudTrail is enabled. What could
    prevent AWS Config from evaluating this rule?

    1. Turning off the configuration recorder

    2. Deleting the rule

    3. Deleting the configuration history for CloudTrail

    4. Failing to specify a frequency for periodic checks

  21. Which of the following would you use to execute a Lambda function whenever an EC2 in-
    stance is launched?

    1. CloudWatch Alarms

    2. EventBridge

    3. CloudTrail

    4. CloudWatch Metrics

228 Chapter 8 The Domain Name System and Network Routing


Review Questions

  1. Which of the following describes the function of a name server?

    1. Translating human-readable domain names into IP addresses

    2. Registering domain names with ICANN

    3. Registering domain names with VeriSign

    4. Applying routing policies to network packets

  2. Your organization is planning a new website and you’re putting together all the
    pieces of information you’ll need to complete the project. Which of the following
    describes a domain?

    1. An object’s FQDN

    2. Policies controlling the way remote requests are resolved

    3. One or more servers, data repositories, or other digital resources identified by a single
      domain name

    4. A label used to direct network requests to a domain’s resources

  3. You need to decide which kind of website name will best represent its purpose. Part of
    that task will involve choosing a top-level domain (TLD). Which of the following is an
    example of a TLD?

    1. amazon.com/documentation/

    2. aws.

    3. amazon.

    4. .com

  4. Which of the following is the name of a record type, as used in a zone file?

    1. CNAME (canonical name)

    2. TTL (time to live)

    3. Record type

    4. Record data

  5. Which of the following DNS record types should you use to associate a domain name with
    an IP address?

    1. NS

    2. SOA

    3. A

    4. CNAME

  6. Which of the following are services provided by Amazon Route 53? (Choose three.)

    1. Domain registration

    2. Content delivery network

    3. Health checks

    4. DNS management

    5. Secure and fast direct network connections to an AWS VPC

  7. For regulatory compliance, your application may only provide data to requests coming from
    the United States. Which of the following routing policies can be configured to do this?

    1. Simple

    2. Latency

    3. Geolocation

    4. Multivalue

  8. Your web application is hosted within multiple AWS regions. Which of the following
    routing policies will ensure the fastest possible access for your users?

    1. Latency

    2. Weighted

    3. Geolocation

    4. Failover

  9. You’re testing three versions of a new application, with each version running on its own
    server and the current production version on a fourth server. You want to route 5 percent of
    your total traffic to each of the test servers and route the remaining 85 percent of traffic to
    the production server. Which routing policy will you use?

    1. Failover

    2. Weighted

    3. Latency

    4. Geolocation

  10. You have production infrastructure in one region sitting behind one DNS domain, and for
    disaster recovery purposes, you have parallel infrastructure on standby in a second AWS
    region behind a second domain. Which routing policy will automate the switchover in the
    event of a failure in the production system?

    1. Latency

    2. Weighted

    3. Geolocation

    4. Failover

  11. Which of the following kinds of hosted zones are real options within Route 53?
    (Choose two.)

    1. Public

    2. Regional

    3. VPC

    4. Private

    5. Hybrid

  12. Which of the following actions will you need to perform to transfer a domain from an
    external registrar to Route 53? (Choose two.)

    1. Unlock the domain transfer setting on the external registrar admin page.

    2. Request an authorization code from the external registrar.

    3. Copy the name server addresses from Route 53 to the external registrar admin page.

    4. Create a hosted zone CNAME record set.

  13. Which of the following actions will you need to perform to use Route 53 to manage a
    domain that’s being hosted on an external registrar?

    1. Request an authorization code from the external registrar.

    2. Copy the name server addresses from Route 53 to the external registrar admin page.

    3. Create a hosted zone CNAME record set.

    4. Unlock the domain transfer setting on the external registrar admin page.

  14. Your multiserver application has been generating quality-related complaints from users and
    your logs show some servers are underused and others have been experiencing intermittent
    failures. How do Route 53 health checks test for the health of a resource so that a failover
    policy can direct your users appropriately?

    1. It periodically tries to load the index.php page.

    2. It periodically tries to load the index.html page.

    3. It periodically tries to load a specified web page.

    4. It periodically tries to log into the resource using SSH.

  15. Which of the following most accurately describes the difference between geolocation and
    geoproximity routing policies?

    1. Geoproximity policies specify geographic areas by their relationship either to a
      particular longitude and latitude or to an AWS region, whereas geolocation policies
      use the continent, country, or U.S. state where the request originated to decide what
      resource to send.

    2. Geolocation policies specify geographic areas by their relationship either to a
      particular longitude and latitude or to an AWS region, whereas geoproximity policies
      use the continent, country, or U.S. state where the request originated to decide what
      resource to send.

    3. Geolocation policies will direct traffic to the resource you identify as primary as long
      as health checks confirm that that resource is running properly, whereas geoproximity
      policies allow you to deliver web pages in customer-appropriate languages.

    4. Geolocation policies use a health check configuration routing to make a deployment
      more highly available, whereas geoproximity policies leverage resources running in
      multiple AWS regions to provide service to clients from the instances that will deliver
      the best experience.

  16. Which of the following are challenges that CloudFront is well positioned to address?
    (Choose two.)

    1. A heavily used website providing media downloads for a global audience

    2. An S3 bucket with large media files used by workers on your corporate campus

    3. A file server accessed through a corporate VPN

    4. A popular website with periodically changing content

  17. Which of the following is not a permitted origin for a CloudFront distribution?

    1. Amazon S3 bucket

    2. AWS MediaPackage channel endpoint

    3. API Gateway endpoint

    4. Web server

  18. What’s the best way to control the costs your CloudFront distribution incurs?

    1. Select a price class that maintains copies in only a limited subset of CloudFront’s edge
      locations.

    2. Configure a custom SSL certificate to restrict access to HTTPS requests only.

    3. Disable the use of Alternate Domain Names (CNAMES) for your distribution.

    4. Enable Compress Objects Automatically for your distribution.

  19. Which of the following is not a direct benefit of using a CloudFront distribution?

    1. User requests from an edge location that’s recently received the same request will be
      delivered with lower latency.

    2. CloudFront distributions can be directly mapped to Route 53 hosted zones.

    3. All user requests will be delivered with lower latency.

    4. You can incorporate free encryption certificates into your infrastructure.

  20. Which of the following content types is the best fit for a Real-Time Messaging Protocol
    (RTMP) distribution?

    1. Amazon Elastic Transcoder–based videos

    2. S3-based videos

    3. Streaming videos

    4. A mix of text and media-rich digital content

Review Questions

  1. When a consumer grabs a message from an SQS queue, what happens to the message?
    (Select two.)

    1. It is immediately deleted from the queue.

    2. It remains in the queue for 30 seconds and is then deleted.

    3. It remains in the queue for the remaining duration of the retention period.

    4. It becomes invisible to other consumers for the duration of the visibility timeout.

  2. What is the default visibility timeout for an SQS queue?

    1. 0 seconds

    2. 30 seconds

    3. 12 hours

    4. 7 days

  3. What is the default retention period for an SQS queue?

    1. 30 minutes

    2. 1 hour

    3. 1 day

    4. 4 days

    5. 7 days

    6. 14 days

  4. You want to make sure that only specific messages placed in an SQS queue are not
    available for consumption for 10 minutes. Which of the following settings can you use
    to achieve this?

    1. Delay queue

    2. Message timer

    3. Visibility timeout

    4. Long polling

  5. Which of the following SQS queue types can handle over 50,000 in-flight messages?

    1. FIFO

    2. Standard

    3. Delay

    4. Short

  6. What SQS queue type always delivers messages in the order they were received?

    1. FIFO

    2. Standard

      242 Chapter 9 Simple Queue Service and Kinesis


    3. LIFO

    4. FILO

    5. Basic

  7. You have an application that uses long polling to retrieve messages from an SQS queue.
    Occasionally, the application crashes because of duplicate messages. Which of the following
    might resolve the issue?

    1. Configure a per-queue delay.

    2. Use a standard queue.

    3. Use a FIFO queue.

    4. Use short polling.

  8. A producer application places messages in an SQS queue, and consumer applications poll
    the queue every 5 seconds using the default polling method. Occasionally, when a consumer
    polls the queue, SQS reports there are no messages in the queue, even though there are.
    When the consumer subsequently polls the queue, SQS delivers the messages. Which of the
    following may explain the missing messages?

    1. Using long polling

    2. Using short polling

    3. Using a FIFO queue

    4. Using a standard queue

  9. Which of the following situations calls for a dead-letter queue?

    1. A message sits in the queue for too long and gets deleted.

    2. Different consumers receive and process the same message.

    3. Messages are mysteriously disappearing from the queue.

    4. A consumer repeatedly fails to process a particular message.

  10. A message that’s 6 days old is sent to a dead-letter queue. The retention period for the
    dead-letter queue and the source queue is 10 days. What will happen to the message?

    1. It will sit in the dead-letter queue for up to 10 days.

    2. It will be immediately deleted.

    3. It will be deleted after four days.

    4. It will sit in the dead-letter queue for up to 20 days.
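
    The key detail behind this question is that SQS tracks a message's age from the time it
    was first enqueued in the source queue, not from when it moved to the dead-letter queue.
    A minimal sketch of the arithmetic:

    ```python
    def days_left_in_dlq(message_age_days, retention_days):
        # SQS retention counts from the original enqueue time, so time already
        # spent in the source queue is subtracted from the DLQ's retention period.
        return max(retention_days - message_age_days, 0)

    print(days_left_in_dlq(6, 10))  # a 6-day-old message has 4 days left
    ```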

  11. You’re developing an application to predict future weather patterns based on RADAR
    images. Which of the following Kinesis services is the best choice to support this
    application?

    1. Kinesis Data Streams

    2. Kinesis Video Streams

    3. Kinesis Data Firehose

    4. Kinesis ML

  12. You’re streaming image data to Kinesis Data Streams and need to retain the data for 30
    days. How can you do this? (Choose two.)

    1. Create a Kinesis Data Firehose delivery stream.

    2. Increase the stream retention period to 14 days.

    3. Specify an S3 bucket as the destination.

    4. Specify CloudWatch Logs as the destination.

  13. Which of the following Kinesis services requires you to specify a destination for the stream?

    1. Kinesis Video Streams

    2. Kinesis Data Streams

    3. Kinesis Data Firehose

    4. Kinesis Data Warehouse

  14. You’re running an on-premises application that frequently writes to a log file. You want
    to stream this log file to a Kinesis Data Stream. How can you accomplish this with the
    least effort?

    1. Use the CloudWatch Logs Agent.

    2. Use the Amazon Kinesis Agent.

    3. Write a script that uses the Kinesis Producer Library.

    4. Move the application to an EC2 instance.

  15. When deciding whether to use SQS or Kinesis Data Streams to ingest data, which of the
    following should you take into account?

    1. The frequency of data

    2. The total amount of data

    3. The number of consumers that need to receive the data

    4. The order of data

  16. You want to send streaming log data into Amazon Redshift. Which of the following
    services should you use? (Choose two.)

    1. SQS with a standard queue

    2. Kinesis Data Streams

    3. Kinesis Data Firehose

    4. SQS with a FIFO queue

  17. Which of the following is not an appropriate use case for Kinesis?

    1. Stock feeds

    2. Facial recognition

    3. Static website hosting

    4. Videoconferencing

  18. You need to push 2 MB per second through a Kinesis Data Stream. How many shards do
    you need to configure?

    1. 1

    2. 2

    3. 4

    4. 8

  19. Multiple consumers are receiving a Kinesis Data Stream at a total rate of 3 MB per second.
    You plan to add more consumers and need the stream to support reads of at least 5 MB per
    second. How many shards do you need to add?

    1. 1

    2. 2

    3. 3

    4. 4
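
    Questions 18 and 19 both reduce to the per-shard throughput limits of Kinesis Data
    Streams: each shard supports roughly 1 MB per second of writes and 2 MB per second of
    reads for shared-throughput consumers. A sketch of the arithmetic, assuming those
    published limits:

    ```python
    import math

    WRITE_MB_PER_SHARD = 1  # ingest limit per shard
    READ_MB_PER_SHARD = 2   # consumption limit per shard (shared throughput)

    def shards_for_writes(mb_per_sec):
        return math.ceil(mb_per_sec / WRITE_MB_PER_SHARD)

    def shards_for_reads(mb_per_sec):
        return math.ceil(mb_per_sec / READ_MB_PER_SHARD)

    print(shards_for_writes(2))    # 2 MB/s of ingest -> 2 shards
    current = shards_for_reads(3)  # 3 MB/s of reads today -> 2 shards
    needed = shards_for_reads(5)   # 5 MB/s of reads -> 3 shards
    print(needed - current)        # shards to add -> 1
    ```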

  20. Which of the following does Kinesis Data Firehose not support?

    1. Videoconferencing

    2. Transforming video metadata

    3. Converting CSV to JSON

    4. Redshift

Review Questions

  1. What’s the minimum level of availability you need to stay under 30 minutes of downtime
    per month?

    1. 99 percent

    2. 99.9 percent

    3. 99.95 percent

    4. 99.999 percent
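
    A quick way to check options like these is to convert each availability percentage into
    allowed downtime. A minimal Python sketch of the arithmetic, assuming a 30-day month:

    ```python
    # Convert an availability percentage into allowed downtime per 30-day month.
    def downtime_minutes_per_month(availability_pct, days=30):
        total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
        return total_minutes * (1 - availability_pct / 100)

    for pct in (99, 99.9, 99.95, 99.999):
        print(f"{pct}% -> {downtime_minutes_per_month(pct):.2f} min/month")
    ```

    Running the loop shows which availability levels keep monthly downtime under the
    30-minute limit.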

  2. Your application runs on two EC2 instances in one availability zone. An elastic load
    balancer distributes user traffic evenly across the healthy instances. The application on
    each instance connects to a single RDS database instance. Assuming each EC2 instance
    has an availability of 90 percent and the RDS instance has an availability of 95 percent,
    what is the total application availability?

    1. 94.05 percent

    2. 99 percent

    3. 99.9 percent

    4. 99.95 percent
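
    Composite availability questions follow two rules: redundant (parallel) components
    multiply their failure probabilities, while serial dependencies multiply their
    availabilities. A short sketch of the arithmetic, using the figures from the question:

    ```python
    def parallel(*avail):
        """Availability of redundant components: 1 - product of failure probabilities."""
        p_fail = 1.0
        for a in avail:
            p_fail *= (1 - a)
        return 1 - p_fail

    def serial(*avail):
        """Availability of chained dependencies: product of availabilities."""
        result = 1.0
        for a in avail:
            result *= a
        return result

    ec2_tier = parallel(0.90, 0.90)  # two redundant instances behind the ELB
    total = serial(ec2_tier, 0.95)   # app tier in series with the RDS instance
    print(f"{total:.2%}")            # prints 94.05%
    ```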

  3. Your organization is designing a new application to run on AWS. The developers have
    asked you to recommend a database that will perform well in all regions. Which database
    should you recommend for maximum availability?

    1. Multi-AZ RDS using MySQL

    2. DynamoDB

    3. Multi-AZ RDS using Aurora

    4. A self-hosted SQL database

  4. Which of the following can help you increase the availability of a web application? (Choose
    all that apply.)

    1. Store web assets in an S3 bucket instead of on the application instance.

    2. Use instance classes large enough to handle your application’s peak load.

    3. Scale your instances in.

    4. Scale your instances out.

  5. You’ve configured an EC2 Auto Scaling group to use a launch configuration to
    provision and install an application on several instances. You now need to reconfigure
    Auto Scaling to install an additional application on new instances. Which of the
    following should you do?

    1. Modify the launch configuration.

    2. Create a launch template and configure the Auto Scaling group to use it.

      270 Chapter 10 The Reliability Pillar


    3. Modify the launch template.

    4. Modify the CloudFormation template.

  6. You create an Auto Scaling group with a minimum group size of 3, a maximum group size
    of 10, and a desired capacity of 5. You then manually terminate two instances in the group.
    Which of the following will Auto Scaling do?

    1. Create two new instances

    2. Reduce the desired capacity to 3

    3. Nothing

    4. Increment the minimum group size to 5

  7. Which of the following can Auto Scaling use for instance health checks? (Choose all
    that apply.)

    1. ELB health checks

    2. CloudWatch Alarms

    3. Route 53 health checks

    4. EC2 system checks

    5. EC2 instance checks

  8. You’re running an application that receives a spike in traffic on the first day of every
    month. You want to configure Auto Scaling to add more instances before the spike begins
    and then add additional instances in proportion to the CPU utilization of each instance.
    Which of the following should you implement? (Choose all that apply.)

    1. Target tracking policies

    2. Scheduled actions

    3. Step scaling policies

    4. Simple scaling policies

  9. Which of the following provide the most protection against data corruption and accidental
    deletion for existing objects stored in S3? (Choose two.)

    1. Versioning

    2. Bucket policies

    3. Cross-region replication

    4. Using the Standard storage class

  10. You need to maintain three days of backups for binary files stored across several EC2
    instances in a spot fleet. What’s the best way to achieve this?

    1. Stream the files to CloudWatch Logs.

    2. Create an Elastic File System and back up the files to it using a cron job.

    3. Create a Snapshot Lifecycle Policy to snapshot each instance every 24 hours and retain
      the latest three snapshots.

    4. Create a Snapshot Lifecycle Policy to snapshot each instance every 4 hours and retain
      the latest 18 snapshots.

  11. You plan to run multi-AZ RDS across three availability zones in a region. You want to have
    two read replicas per zone. Which database engine should you choose?

    1. MySQL

    2. PostgreSQL

    3. MariaDB

    4. Aurora

  12. You’re running an RDS instance in one availability zone. What should you implement to be
    able to achieve a recovery point objective (RPO) of five minutes?

    1. Configure multi-AZ.

    2. Enable automated snapshots.

    3. Add a read replica in the same region.

    4. Add a read replica in a different region.

  13. When creating subnets in a VPC, what are two reasons to leave sufficient space in the VPC
    for more subnets later? (Choose two.)

    1. You may need to add another tier for your application.

    2. You may need to implement RDS.

    3. AWS occasionally adds more availability zones to a region.

    4. You may need to add a secondary CIDR to the VPC.

  14. You plan to deploy 50 EC2 instances, each with two private IP addresses. To put all of these
    instances in a single subnet, which subnet CIDRs could you use? (Choose all that apply.)

    1. 172.21.0.0/25

    2. 172.21.0.0/26

    3. 10.0.0.0/8

    4. 10.0.0.0/21
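
    Subnet-sizing questions like this come down to address arithmetic: a /n IPv4 block holds
    2^(32 - n) addresses, AWS reserves five addresses in every subnet, and a subnet CIDR
    must fall between /16 and /28. A minimal sketch under those rules:

    ```python
    def usable_addresses(prefix_len):
        # AWS reserves 5 addresses in every subnet (network, router,
        # DNS, future use, and broadcast).
        return 2 ** (32 - prefix_len) - 5

    def fits(prefix_len, instances, ips_per_instance=2):
        if not 16 <= prefix_len <= 28:  # allowed subnet sizes in a VPC
            return False
        return usable_addresses(prefix_len) >= instances * ips_per_instance

    for p in (25, 26, 8, 21):
        print(f"/{p}: {fits(p, 50)}")
    ```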

  15. You’re currently connecting to your AWS resources using a 10 Gbps Internet connection at
    your office. You also have end users around the world who access the same AWS resources.
    What are two reasons you may consider using Direct Connect in addition to your Internet
    connection? (Choose two.)

    1. Lower latency

    2. Higher bandwidth

    3. Better end-user experience

    4. Increased security

  16. Before connecting a VPC to your data center, what must you do to ensure proper
    connectivity?

    1. Use IAM policies to restrict access to AWS resources.

    2. Ensure the IP address ranges in the networks don’t overlap.

    3. Ensure security groups on your data center firewalls are properly configured.

    4. Use in-transit encryption.

  17. You plan to run a stand-alone Linux application on AWS and need 99 percent availability.
    The application doesn’t require a database, and only a few users will access it. You will
    occasionally need to terminate and re-create the instance using a different AMI. Which of
    the following should you use? (Choose all that apply.)

    1. CloudFormation

    2. Auto Scaling

    3. User data

    4. Dynamic scaling policies

  18. You need eight instances running simultaneously in a single region. Assuming three
    availability zones are available, what’s the minimum number of instances you must run
    in each zone to be able to withstand a single zone failure?

    1. 3

    2. 16

    3. 8

    4. 4
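
    A handy rule for zone-failure questions: to keep N instances available after losing one
    of Z zones, each zone must run at least ceil(N / (Z - 1)) instances. A one-function
    sketch:

    ```python
    import math

    def instances_per_zone(required, zones):
        # Size each zone so the remaining zones still meet the requirement
        # after any single zone fails.
        return math.ceil(required / (zones - 1))

    print(instances_per_zone(8, 3))  # -> 4 per zone (12 total)
    ```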

  19. If your application is down for 45 minutes a year, what is its approximate availability?

    1. 99 percent

    2. 99.9 percent

    3. 99.99 percent

    4. 99.95 percent

  20. You’re running an application in two regions and using multi-AZ RDS with read replicas
    in both regions. Users normally access the application in only one region by browsing to a
    public domain name that resolves to an elastic load balancer. If that region fails, which of
    the following should you do to fail over to the other region? (Choose all that apply.)

    1. Update the DNS record to point to the load balancer in the other region.

    2. Point the load balancer to the other region.

    3. Failover to the database in the other region.

    4. Restore the database from a snapshot.

Review Questions

  1. Which of the following are parameters used to describe the performance of specific EC2
    instance types? (Choose three.)

    1. ECUs (EC2 compute units)

    2. vCPUs (virtual CPUs)

    3. ACCpR (Aggregate Cumulative Cost per Request)

    4. Intel AES-NI

    5. Maximum read replicas

  2. As the popularity of your EC2-based application grows, you need to improve your
    infrastructure so it can better handle fluctuations in demand. Which of the following
    are normally necessary components for successful Auto Scaling? (Choose three.)

    1. Launch configuration

    2. Load balancer

    3. Custom-defined EC2 AMI

    4. A start.sh script

    5. An AWS OpsWorks stack

  3. Which of the following best describes the role that launch configurations play in
    Auto Scaling?

    1. Define the capacity metric that will trigger a scaling change.

    2. Define the AMI to be deployed by Auto Scaling operations.

    3. Control the minimum and maximum number of instances to allow.

    4. Define the associated load balancer.

  4. You’re considering building your new e-commerce application using a microservices
    architecture where individual servers are tasked with separate but complementary
    tasks (document server, database, cache, etc.). Which of the following is probably the
    best platform?

    1. Elastic Container Service

    2. Lambda

    3. ECR

    4. Elastic Beanstalk

  5. Your EC2 deployment profile would benefit from a traditional RAID configuration for
    the EBS volumes you’re using. Where are RAID-optimized EBS volume configurations
    performed?

    1. From the EBS dashboard

    2. From the EC2 Storage Optimization dashboard

      298 Chapter 11 The Performance Efficiency Pillar


    3. From the AWS CLI

    4. From within the EC2 instance OS

  6. Which of the following tools will provide both low-latency access and resilience for your
    S3-based data?

    1. CloudFront

    2. RAID arrays

    3. Cross-region replication

    4. Transfer Acceleration

  7. Which of the following tools uses CloudFront edge locations to speed up data transfers?

    1. Amazon S3 Transfer Acceleration

    2. S3 Cross-Region Replication

    3. EBS Data Transfer Wizard

    4. EC2 Auto Scaling

  8. Your multi-tiered application has been experiencing slower than normal data reads and
    writes. As you work on improving performance, which of the following is not a major
    design consideration for a managed RDS database?

    1. Optimizing indexes

    2. Optimizing scalability

    3. Optimizing schemas

    4. Optimizing views

  9. Which of the following are possible advantages of hosting a relational database on an EC2
    instance over using the RDS service? (Choose two.)

    1. Automated software patches

    2. Automated OS updates

    3. Out of the box Auto Scaling

    4. Cost savings

    5. Greater host control

  10. You’ve received complaints from users that performance on your EC2-based graphics
    processing application is slower than normal. Demand has been rising over the past
    month or two, which could be a factor. Which of the following are most likely to help?
    (Choose two.)

    1. Moving your application to Amazon Lightsail

    2. Switching to an EC2 instance that supports enhanced graphics

    3. Deploying Amazon Elasticsearch in front of your instance

    4. Increasing the instance limit on your Auto Scaling group

    5. Putting your application behind a CloudFront distribution

  11. Which of the following load balancer types is optimized for TCP-based applications and
    preserves the source IP address?

    1. Application load balancer

    2. Classic load balancer

    3. Network load balancer

    4. Dynamic load balancer

  12. Which of the following can be used to configure a CloudFormation template?
    (Choose three.)

    1. The CloudFormation drag-and-drop interface

    2. Selecting a prebuilt sample template

    3. Importing a template from AWS CloudDeploy

    4. Creating your own JSON template document

    5. Importing a template from Systems Manager

  13. Which of the following details is not a necessary component of a CloudFormation
    configuration?

    1. Default node name

    2. Stack name

    3. Database name

    4. DBUser name

  14. Which of the following can be integrated into your AWS workflow through AWS
    OpsWorks? (Choose two.)

    1. Ansible

    2. Chef

    3. Terraform

    4. SaltStack

    5. Puppet

  15. Which of the following are important elements of a successful resource monitoring
    protocol? (Choose two.)

    1. CloudWatch dashboards

    2. CloudWatch OneView

    3. SNS alerts

    4. AWS Config dashboards

  16. Which of the following will most enhance the value of the CloudWatch data your resources
    generate? (Choose two.)

    1. Predefined performance baselines

    2. Predefined key performance indicators (KPIs)

    3. Advance permission from AWS

    4. A complete record of your account’s resource configuration changes

    5. A running Service Catalog task

  17. Which of the following can be used to audit the changes made to your account and resource
    configurations?

    1. AWS CloudTrail

    2. AWS CloudWatch

    3. AWS CodePipeline

    4. AWS Config

  18. Which of the following caching engines can be integrated with Amazon ElastiCache?
    (Choose two.)

    1. Varnish

    2. Redis

    3. Memcached

    4. Nginx

  19. Which of the following use case scenarios are a good fit for caching using Redis and
    ElastiCache? (Choose two.)

    1. Your online application requires users’ session states to be saved and the behavior of all
      active users to be compared.

    2. Your online application needs the fastest operation available.

    3. Your admin is not familiar with caching and is looking for a relatively simple setup for
      a straightforward application performance improvement.

    4. You’re not sure what your application needs might be in a month or two, so you want
      to leave open as many options as possible.

  20. Which of the following database engines is not a candidate for read replicas within
    Amazon RDS?

    1. MySQL

    2. Oracle

    3. MariaDB

    4. PostgreSQL

Review Questions

  1. Which of the following options can you not set in a password policy? (Choose two.)

    1. Maximum length

    2. Require the use of numbers.

    3. Prevent multiple users from using the same password.

    4. Require an administrator to reset an expired password.

  2. An IAM user is attached to a customer-managed policy granting them sufficient access to
    carry out their duties. You want to require multifactor authentication (MFA) for this user to
    use the AWS CLI. What element should you change in the policy?

    1. Resource

    2. Condition

    3. Action

    4. Principal
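For reference, MFA requirements in IAM are expressed as a condition on the policy statement. A minimal sketch, with illustrative actions and values that are not taken from the question itself:

```python
import json

# Illustrative customer-managed policy: the aws:MultiFactorAuthPresent
# condition key gates the allowed actions on an MFA-authenticated session.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:*",   # placeholder action set for this sketch
        "Resource": "*",
        "Condition": {
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        }
    }]
}
print(json.dumps(policy["Statement"][0]["Condition"], indent=2))
```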

  3. You created an IAM policy that another administrator subsequently modified. You need
    to restore the policy to its original state but don’t remember how it was configured. What
    should you do to restore the policy? (Choose two.)

    1. Consult CloudTrail global management event logs.

    2. Restore the policy from a snapshot.

    3. Consult CloudTrail data event logs.

    4. Revert to the previous policy version.

  4. An IAM user with full access to all EC2 actions in all regions assumes a role that has
    access to only the EC2 RunInstances operation in the us-east-1 region. What will the
    user be able to do under the assumed role?

    1. Create a new instance in any region.

    2. Create a new instance in the us-east-1 region.

    3. Start an existing instance in the us-east-1 region.

    4. Start an existing instance in any region.

  5. Several objects in an S3 bucket are encrypted using a KMS customer master key. Which of
    the following will give an IAM user permission to decrypt these objects?

    1. Add the user to the key policy as a key user.

    2. Grant the user access to the key using an IAM policy.

    3. Add the user to the key policy as a key administrator.

    4. Add the user as a principal to the bucket policy.

      330 Chapter 12 The Security Pillar


  6. You run a public-facing application on EC2 instances. The application is backed by a
    database running on RDS. Users access it using multiple domain names that are hosted
    in Route 53. You want to get an idea of what IP addresses are accessing your application.
    Which of the following would you stream to CloudWatch Logs to get this information?

    1. RDS logs

    2. DNS query logs

    3. VPC flow logs

    4. CloudTrail logs

  7. You’re running a web server that keeps a detailed log of web requests. You want to
    determine which IP address has made the most requests in the last 24 hours. What
    should you do to accomplish this? (Choose two.)

    1. Create a metric filter.

    2. Stream the web server logs to CloudWatch Logs.

    3. Upload the web server log to S3.

    4. Use Athena to query the data.
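Whichever services are involved, the underlying task is counting requests per source IP. A minimal sketch of that aggregation, using invented log lines purely for illustration:

```python
from collections import Counter

# Invented web-server log lines; the first whitespace-separated
# field is the client IP address.
log_lines = [
    '203.0.113.10 - - [10/Oct/2023:13:55:36] "GET / HTTP/1.1" 200',
    '198.51.100.7 - - [10/Oct/2023:13:55:40] "GET /a HTTP/1.1" 200',
    '203.0.113.10 - - [10/Oct/2023:13:55:41] "GET /b HTTP/1.1" 404',
]

# Tally IPs and report the most frequent one.
top_ip, count = Counter(line.split()[0] for line in log_lines).most_common(1)[0]
print(top_ip, count)  # 203.0.113.10 2
```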

  8. An application running on an EC2 instance has been updated to send large amounts of
    data to a server in your data center for backup. Previously, the instance generated very little
    traffic. Which GuardDuty finding type is this likely to trigger?

    1. Behavior

    2. Backdoor

    3. Stealth

    4. ResourceConsumption

  9. You’ve set up an AWS Config managed rule to check whether a particular security group
    is attached to every instance in a VPC. You receive an SNS notification that an instance
    is out of compliance. But when you check the instance a few hours later, the security
    group is attached. Which of the following may help explain the apparent discrepancy?
    (Choose two.)

    1. The AWS Config timeline

    2. Lambda logs

    3. CloudTrail management event logs

    4. VPC flow logs

  10. You want to use Amazon Inspector to analyze the security posture of your EC2 instances
    running Windows. Which rules package should you not use in your assessment?

    1. Common Vulnerabilities and Exposures

    2. Center for Internet Security Benchmarks

    3. Runtime Behavior Analysis

    4. Security Best Practices

  11. You have a distributed application running in datacenters around the world. The
    application connects to a public Simple Queue Service (SQS) endpoint to send messages
    to a queue. How can you prevent an attacker from using this endpoint to gain
    unauthorized access to the queue? (Choose two.)

    1. Network access control lists

    2. Security groups

    3. IAM policies

    4. SQS access policies
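One of the options above, an SQS access policy, is a resource-based policy attached to the queue itself. A minimal sketch of one that restricts who may send messages; the queue ARN, account ID, and CIDR range are illustrative placeholders:

```python
import json

# Hypothetical queue policy: allow SendMessage only from a known
# address range. All identifiers below are made up for this sketch.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:example-queue",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }]
}
print(json.dumps(queue_policy, indent=2))
```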

  12. You’re using a public-facing application load balancer to forward traffic to EC2 instances in
    an Auto Scaling group. What can you do to ensure users on the Internet can reach the load
    balancer over HTTPS without reaching your instances directly? (Choose two.)

    1. Create a security group that allows all inbound traffic to TCP port 443.

    2. Attach the security group to the instances.

    3. Attach the security group to the load balancer.

    4. Remove the Internet gateway from the VPC.

    5. Create a security group that allows all inbound traffic to TCP port 80.

  13. You’re running a UDP-based application on an EC2 instance. How can you protect it
    against a DDoS attack?

    1. Place the instance behind a network load balancer.

    2. Implement a security group to restrict inbound access to the instance.

    3. Place the instance behind an application load balancer.

    4. Enable AWS Shield Standard.

  14. You’re running a web application on six EC2 instances behind a network load balancer.
    The web application uses a MySQL database. How can you protect your application against
    SQL injection attacks?

    1. Enable WAF.

    2. Assign elastic IP addresses to the instances.

    3. Place the instances behind an application load balancer.

    4. Block TCP port 3306.

  15. Which services protect against an HTTP flood attack?

    1. GuardDuty

    2. WAF

    3. Shield Standard

    4. Shield Advanced

  16. Your security policy requires that you use a KMS key for encrypting S3 objects. It further
    requires this key be rotated once a year and revoked when misuse is detected. Which key
    type should you use? (Choose two.)

    1. Customer-managed CMK

    2. AWS-managed CMK

    3. S3-managed key

    4. Customer-provided key

  17. A developer is designing an application to run on AWS and has asked for your input in
    deciding whether to use a SQL database or DynamoDB for storing highly transactional
    application data. Your security policy requires all application data to be encrypted and
    encryption keys to be rotated every 90 days. Which AWS service should you recommend for
    storing application data? (Choose two.)

    1. KMS

    2. RedShift

    3. DynamoDB

    4. RDS

  18. You need to copy the data from an unencrypted EBS volume to another region and encrypt
    it. How can you accomplish this? (Choose two.)

    1. Create an encrypted snapshot of the unencrypted volume.

    2. Simultaneously encrypt and copy the snapshot to the destination region.

    3. Copy the encrypted snapshot to the destination region.

    4. Create an unencrypted snapshot of the unencrypted volume.

  19. An instance with an unencrypted EBS volume has an unencrypted EFS filesystem mounted
    on it. You need to encrypt the data on an existing EFS filesystem using a KMS key. How
    can you accomplish this?

    1. Encrypt the EBS volume of the instance.

    2. Create a new encrypted EFS filesystem and copy the data to it.

    3. Enable encryption on the existing EFS filesystem.

    4. Use a third-party encryption program to encrypt the data.

  20. On which of the following can you not use an ACM-generated TLS certificate?
    (Choose two.)

    1. An S3 bucket

    2. A CloudFront distribution

    3. An application load balancer

    4. An EC2 instance

  21. Which of the following assesses the security posture of your AWS resources against AWS
    best practices?

    1. Detective

    2. Macie

    3. Security Hub

    4. GuardDuty

Review Questions

  1. Which of the following best describes the AWS Free Tier?

    1. Free access to AWS services for a new account’s first month

    2. Free access to all instance types of AWS EC2 instances for new accounts

    3. Free access to basic levels of AWS services for a new account’s first year

    4. Unlimited and open-ended access to the “free tier” of most AWS services

  2. Which of the following storage classes provides the least expensive storage and
    transfer rates?

    1. Amazon S3 Glacier

    2. Amazon S3 Standard-Infrequent Access

    3. Amazon S3 Standard

    4. Amazon S3 One Zone-Infrequent Access

  3. Which AWS service is best suited to controlling your spending by sending email alerts?

    1. Cost Explorer

    2. Budgets

    3. Organizations

    4. TCO Calculator

  4. Your AWS infrastructure is growing and you’re beginning to have trouble keeping track
    of what you’re spending. Which AWS service is best suited to analyzing account usage
    data at scale?

    1. Trusted Advisor

    2. Cost Explorer

    3. Budgets

    4. Cost and Usage Reports

  5. Your company wants to more closely coordinate the administration of its multiple AWS
    accounts, and AWS Organizations can help it do that. How does that change your security
    profile? (Choose three.)

    1. An organization-level administration account breach is potentially more damaging.

    2. User permissions can be controlled centrally at the organization level.

    3. You should upgrade to use only specially hardened organization-level VPCs.

    4. Standard security best practices such as MFA and strong passwords are even more
      essential.

    5. You should upgrade all of your existing security groups to account for the changes.

      350 Chapter 13 The Cost Optimization Pillar


  6. Which of the following resource states are monitored by AWS Trusted Advisor?
    (Choose two.)

    1. Route 53 routing failures

    2. Running but idle EC2 instances

    3. S3 buckets with public read access permissions

    4. EC2 Linux instances that allow root account SSH access

    5. Unencrypted S3 bucket data transfers

  7. You’re planning a new AWS deployment, and your team is debating whether they’ll be
    better off using an RDS database or one run on an EC2 instance. Which of the following
    tools will be most helpful?

    1. TCO Calculator

    2. AWS Pricing Calculator

    3. Trusted Advisor

    4. Cost and Usage Reports

  8. Which of the following is not a metric you can configure an AWS budget to track?

    1. EBS volume capacity

    2. Resource usage

    3. Reserved instance coverage

    4. Resource cost

  9. Which of the following statements are true of cost allocation tags? (Choose two.)

    1. Tags can take up to 24 hours before they appear in the Billing and Cost Management
      dashboard.

    2. Tags can’t be applied to resources that were launched before the tags were created.

    3. You’re allowed five free budgets per account.

    4. You can activate and manage cost allocation tags from the Tag Editor page.

  10. Your online web store normally requires three EC2 instances to handle traffic but
    experiences a twofold increase in traffic for the two summer months. Which of the
    following approaches makes the most sense?

    1. Run three on-demand instances 12 months per year and schedule six reserved instances
      for the summer months.

    2. Run three spot instances for the summer months and three reserved instances 12
      months/year.

    3. Run nine reserved instances for 12 months/year.

    4. Run three reserved instances 12 months/year and purchase three scheduled reserved
      instances for the summer months.

  11. Which of the following settings do you not need to provide when configuring a
    reserved instance?

    1. Payment option

    2. Standard or Convertible RI

    3. Interruption policy

    4. Tenancy

  12. Your new web application requires multiple EC2 instances running 24/7 and you’re going
    to purchase reserved instances. Which of the following payment options is the most
    expensive when configuring a reserved instance?

    1. All Upfront

    2. Partial Upfront

    3. No Upfront

    4. Monthly

  13. Which of the following benefits of containers such as Docker can significantly reduce your
    AWS compute costs? (Choose two.)

    1. Containers can launch quickly.

    2. Containers can deliver increased server density.

    3. Containers make it easy to reliably replicate server environments.

    4. Containers can run using less memory than physical machines.

  14. Which of the following is the best usage of an EC2 reserved instance?

    1. An application that will run continuously for six months straight

    2. An application that will run continuously for 36 months straight

    3. An application that runs only during local business hours

    4. An application that runs at unpredictable times and can survive unexpected shutdowns

  15. Which of the following describes “unused EC2 instances matching a particular set of
    launch specifications”?

    1. Request type

    2. Spot instance interruption

    3. Spot fleet

    4. Spot instance pool

  16. Which of the following best describes a spot instance interruption?

    1. A spot instance interruption occurs when the spot price rises above your maximum.

    2. A spot instance interruption is the termination of a spot instance when its workload
      completes.

    3. A spot instance interruption occurs when a spot request is manually restarted.

    4. A spot instance interruption is the result of a temporary outage in an AWS data center.

  17. Which of the following describes the maximum instances or vCPUs you want running?

    1. Spot instance pool

    2. Target capacity

    3. Spot maximum

    4. Spot cap

  18. You need to make sure your EBS volumes are regularly backed up, but you’re afraid you’ll
    forget to remove older snapshot versions, leading to expensive data bloat. What’s the best
    solution to this problem?

    1. Configure the EBS Lifecycle Manager.

    2. Create a script that will regularly invoke the AWS CLI to prune older snapshots.

    3. Configure an EBS Scheduled Reserved Instance.

    4. Tie a string to your finger.

    5. Configure an S3 Lifecycle configuration policy to remove old snapshots.

  19. Which of these AWS CLI commands will launch a spot fleet?

    1. aws ec2 request-fleet --spot-fleet-request-config file://Config.json

    2. aws ec2 spot-fleet --spot-fleet-request-config file://Config.json

    3. aws ec2 launch-spot-fleet --spot-fleet-request-config file://Config.json

    4. aws ec2 request-spot-fleet --spot-fleet-request-config file://Config.json
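All four options pass --spot-fleet-request-config, which points at a JSON file. A minimal sketch of what such a Config.json might contain; the role ARN, AMI ID, and capacity values are illustrative placeholders, not working identifiers:

```python
import json

# Illustrative spot fleet request configuration; every value below is a
# placeholder rather than a real ARN or AMI ID.
config = {
    "TargetCapacity": 2,
    "IamFleetRole": "arn:aws:iam::123456789012:role/example-fleet-role",
    "LaunchSpecifications": [
        {"ImageId": "ami-0example0000000000", "InstanceType": "t3.micro"},
    ],
}

# Write the file that a file://Config.json argument would refer to.
with open("Config.json", "w") as f:
    json.dump(config, f, indent=2)
```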

  20. Which of the following elements is not something you’d include in your spot fleet request?

    1. Availability zone

    2. Target capacity

    3. Platform (the instance OS)

    4. AMI

Review Questions

  1. When using CloudFormation to provision multiple stacks of related resources, by which of
    the following should you organize your resources into different stacks? (Choose two.)

    1. Cost

    2. S3 bucket

    3. Lifecycle

    4. Ownership

  2. Which of the following resource properties are good candidates for definition as parameters
    in a CloudFormation template? (Choose two.)

    1. AMI ID

    2. EC2 key pair name

    3. Stack name

    4. Logical ID

  3. You want to use nested stacks to create an EC2 Auto Scaling group and the supporting
    VPC infrastructure. These stacks do not need to pass any information to stacks outside of
    the nested stack hierarchy. Which of the following must you add to the template that creates
    the Auto Scaling group?

    1. An Export field to the Output section

    2. A resource of the type AWS::EC2::VPC

    3. A resource of the type AWS::CloudFormation::Stack

    4. Fn::ImportValue

  4. You need to update a stack that has a stack policy applied. What must you do to verify the
    specific resources CloudFormation will change before updating the stack?

    1. Create a change set.

    2. Perform a direct update.

    3. Update the stack policy.

    4. Override the stack policy.

  5. You’ve granted a developer’s IAM user permissions to read and write to a CodeCommit
    repository using Git. What information should you give the developer to access the
    repository as an IAM user?

    1. IAM username and password

    2. Access key and secret key

    3. Git username and password

    4. SSH public key

      382 Chapter 14 The Operational Excellence Pillar


  6. You need to grant access to a specific CodeCommit repository to only one IAM user. How
    can you do this?

    1. Specify the repository’s clone URL in an IAM policy.

    2. Generate Git credentials only for the user.

    3. Specify the user’s ARN in a repository policy.

    4. Specify the repository’s ARN in an IAM policy.

  7. You need to store text-based documentation for your data center infrastructure. This
    documentation changes frequently, and auditors need to be able to see how the documents
    change over time. The documents must also be encrypted at rest. Which service should
    you use to store the documents and why?

    1. CodeCommit, because it offers differencing

    2. S3, because it offers versioning

    3. S3, because it works with customer-managed KMS keys

    4. CodeCommit, because it works with customer-managed KMS keys

  8. Which command will download a CodeCommit repository?

    1. aws codecommit get-repository

    2. git clone

    3. git push

    4. git add

  9. You need to deploy an application using CodeDeploy. Where must you place your
    application files so that CodeDeploy can deploy them?

    1. An EBS snapshot

    2. A CodeCommit repository

    3. A self-hosted Git repository

    4. An S3 bucket

  10. Which deployment type requires an elastic load balancer?

    1. In-place instance deployment

    2. Blue/green instance deployment

    3. Blue/green Lambda deployment

    4. In-place Lambda deployment

  11. You want to use CodeDeploy to perform an in-place upgrade of an application running on
    five instances. You consider the entire deployment successful if the deployment succeeds
    even on only one instance. Which preconfigured deployment configuration should you use?

    1. OneAtATime

    2. HalfAtATime

    3. AllAtOnce

    4. OnlyOne

  12. You want CodeDeploy to run a shell script that performs final checks against your
    application after allowing traffic to it and before declaring the deployment successful.
    Which lifecycle event hook should you use?

    1. ValidateService

    2. AfterAllowTraffic

    3. BeforeAllowTraffic

    4. AllowTraffic

  13. The build stage of your software development pipeline compiles Java source code into a
    binary JAR file that can be deployed to a web server. CodePipeline compresses this file and
    puts it in an S3 bucket. What’s the term for this compressed file?

    1. An artifact

    2. A provider

    3. An asset

    4. A snapshot

  14. You’re designing an automated continuous integration pipeline and want to ensure
    developers don’t accidentally trigger a release to production when checking in code.
    What are two ways to accomplish this? (Choose two.)

    1. Create a separate bucket in S3 to store artifacts for deployment.

    2. Implement an approval action before the deploy stage.

    3. Disable the transition to the deploy stage.

    4. Don’t allow developers access to the deployment artifact bucket.

  15. You have CloudFormation templates stored in a CodeCommit repository. Whenever
    someone updates a template, you want a new CloudFormation stack automatically
    deployed. How should you design a CodePipeline pipeline to achieve this? (Choose all
    that apply).

    1. Use a source action with the CodeCommit provider.

    2. Use a build action with the CloudFormation provider.

    3. Use a deploy action with the CodeCommit provider.

    4. Use a deploy action with the CloudFormation provider.

    5. Create a two-stage pipeline.

    6. Create a three-stage pipeline.

    7. Create a single-stage pipeline.

  16. How many stages can you have in a pipeline?

    1. 1

    2. 10

    3. 20

    4. 21

  17. You need to manually take EBS snapshots of several hundred volumes. Which type of
    Systems Manager document enables you to do this?

    1. Command

    2. Automation

    3. Policy

    4. Manual

  18. You want to use Systems Manager to perform routine administration tasks and collect
    software inventory on your EC2 instances running Amazon Linux. You already have an
    instance profile attached to these instances. Which of the following should you do to enable
    you to use Systems Manager for these tasks?

    1. Add the permissions from the AmazonEC2RoleforSSM managed policy to the role
      you’re using for the instance profile.

    2. Manually install the Systems Manager agent.

    3. Use Session Manager to install the Systems Manager agent.

    4. Modify the instance security groups to allow access from Systems Manager.

  19. You’ve configured Patch Manager to patch your Windows instances every Saturday. The
    custom patch baseline you’re using has a seven-day auto-approval delay for security-related
    patches. On this Monday, a critical security patch was released, and you want to push it to
    your instances as soon as possible. You also want to take the opportunity to install all other
    available security-related packages. How can you accomplish this? (Choose two).

    1. Execute the AWS-RunPatchBaseline document.

    2. Add the patch to the list of approved patches in the patch baseline.

    3. Change the maintenance window to occur every Monday at midnight.

    4. Set the patch baseline’s auto-approval delay to zero days.

  20. You’ve installed the Systems Manager agent on an Ubuntu instance and ensured the correct
    instance profile is applied. But Systems Manager Insights don’t display the current network
    configuration. Which of the following must you do to be able to automatically collect and
    view the network configuration for this and future instances in the same region?

    1. Make sure the instance is running.

    2. Create a global inventory association.

    3. Execute the AWS-GatherSoftwareInventory policy document against the instance.

    4. Execute the AWS-SetupManagedInstance automation document against the instance.


Appendix

Answers to Review Questions


Chapter 1: Introduction to Cloud
Computing and AWS

  1. B. Elastic Beanstalk takes care of the ongoing underlying deployment details for you,
    allowing you to focus exclusively on your code. Lambda will respond to trigger events by
    running code a single time, Auto Scaling will ramp up existing infrastructure in response to
    demand, and Route 53 manages DNS and network routing.

  2. A. CloudFront maintains a network of endpoints where cached versions of your application
    data are stored to provide quicker responses to user requests. Route 53 manages DNS and
    network routing, Elastic Load Balancing routes incoming user requests among a cluster of
    available servers, and Glacier provides high-latency, low-cost file storage.

  3. D. Elastic Block Store provides virtual block devices (think: storage drives) on which you
    can install and run filesystems and data operations. It is not normally a cost-effective
    option for long-term data storage.

  4. A, C. AWS IAM lets you create user accounts, groups, and roles and assign them rights
    and permissions over specific services and resources within your AWS account. Directory
    Service allows you to integrate your resources with external users and resources through
    third-party authentication services. KMS is a tool for generating and managing encryption
    keys, and SWF is a tool for coordinating application tasks. Amazon Cognito can be used to
    manage authentication for your application users, but not your internal admin teams.

  5. C. DynamoDB provides a NoSQL (nonrelational) database service. Both are good for
    workloads that can be more efficiently run without the relational schema of SQL database
    engines (like those, including Aurora, that are offered by RDS). KMS is a tool for gener-
    ating and managing encryption keys.

  6. D. EC2 endpoints will always start with an ec2 prefix followed by the region designation
    (eu-west-1 in the case of Ireland).

  7. A. An availability zone is an isolated physical data center within an AWS region. Regions
    are geographic areas that contain multiple availability zones, subnets are IP address
    blocks that can be used within a zone to organize your networked resources, and there
    can be multiple data centers within an availability zone.

  8. B. VPCs are virtualized network environments where you can control the connectivity of
    your EC2 (and RDS, etc.) infrastructure. Load Balancing routes incoming user requests
    among a cluster of available servers, CloudFront maintains a network of endpoints where
    cached versions of your application data are stored to provide quicker responses to user
    requests, and AWS endpoints are URIs that point to AWS resources within your account.

  9. C. The AWS service level agreement tells you the level of service availability you can
    realistically expect from a particular AWS service. You can use this information when
    assessing your compliance with external standards. Log records, though they can offer
    important historical performance metrics, probably won’t be enough to prove compliance.
    The AWS Compliance Programs page will show you only which regulatory programs can
    be satisfied with AWS resources, not whether a particular configuration will meet
    their demands.

    The AWS Shared Responsibility Model outlines who is responsible for various elements of
    your AWS infrastructure. There is no AWS Program Compliance tool.

  10. B. The AWS Command Line Interface (CLI) is a tool for accessing AWS APIs from the
    command-line shell of your local computer. The AWS SDK is for accessing resources
    programmatically, the AWS Console works graphically through your browser, and AWS
    Config is a service for editing and auditing your AWS account resources.

  11. A. Unlike the Basic and Developer plans (which allow no users or only one user,
    respectively, access to a support associate), the Business plan allows multiple
    team members.


Chapter 2: Amazon Elastic Compute
Cloud and Amazon Elastic Block Store

  1. A, C. Many third-party companies maintain official and supported AMIs running their
    software on the AWS Marketplace. AMIs hosted among the community AMIs are not
    always official and supported versions. Since your company will need multiple such
    instances, you’ll be better off automating the process by bootstrapping rather than
    having to configure the software manually each time. The Site-to-Site VPN tool doesn’t
    use OpenVPN.

  2. B, C. The VM Import/Export tool handles the secure and reliable transfer of a virtual
    machine between your AWS account and your local data center. A successfully imported
    VM will appear among the private AMIs in the region you selected. Direct S3 uploads
    and SSH tunnels are not associated with VM Import/Export.

  3. D. AMIs are specific to a single AWS region and cannot be deployed into any other region.
    If your AWS CLI or its key pair was not configured properly, your connection would have
    failed completely. A public AMI being unavailable because it’s “updating” is theoretically
    possible but unlikely.

  4. A. Only Dedicated Host tenancy offers full isolation. Shared tenancy instances will often
    share hardware with operations belonging to other organizations. Dedicated instance
    tenancy instances may be hosted on the same physical server as other instances within
    your account.

  5. A, E. Reserved instances will give you the best price for instances you know will be
    running 24/7, whereas on-demand makes the most sense for workloads that will run at
    unpredictable times but can’t be shut down until they’re no longer needed. Load balancing
    controls traffic routing and, on its own, has no impact on your ability to meet changing
    demand. Since the m5.large instance type is all you need to meet normal workloads,
    you’ll be wasting money by running a larger type 24/7.

  6. B. Spot market instances can be shut down with only a minimal (two-minute) warning, so
    they’re not recommended for workloads that require reliably predictable service. Even if
    your AMI can be relaunched, the interrupted workload will still be lost. Static S3 websites
    don’t run on EC2 infrastructure in the first place.

  7. A. You can edit or even add or remove security groups from running instances and the
    changes will take effect instantly. Similarly, you can associate or release an elastic IP
    address to/from a running instance. You can change an instance type as long as you shut
    down the instance first. But the AMI can’t be changed; you’ll need to create an entirely
    new instance.

  8. B. The first of two (and not three) strings in a resource tag is the key—the group to which
    the specific resource belongs. The second string is the value, which identifies the resource
    itself. If the key looks too much like the value, it can cause confusion.

  9. D. Provisioned-IOPS SSD volumes are currently the only type that comes close to 20,000
    IOPS. In fact, they can deliver up to 64,000 IOPS.

  10. B, C, E. Options B, C, and E are steps necessary for creating and sharing such an image.
    When an image is created, a snapshot is automatically created from which an AMI is
    built. You do not, however, create a snapshot from an image. The AWS Marketplace
    contains only public images: hopefully, no one will have uploaded your organization’s
    private image there!

  11. A, C. The fact that instance volumes are physically attached to the host server and add
    nothing to an instance cost is a benefit. The data on instance volumes is ephemeral and will
    be lost as soon as the instance is shut down. There is no way to set termination protection
    for instance volumes because they’re dependent on the lifecycle of their host instances.

  12. C, D. By default, EC2 uses the standard address blocks for private subnets, so all private
    addresses will fall within these ranges: 10.0.0.0 to 10.255.255.255, 172.16.0.0 to
    172.31.255.255, and 192.168.0.0 to 192.168.255.255.
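The three ranges cited in this answer are the standard RFC 1918 private blocks, which can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The RFC 1918 private address blocks cited in the answer above.
PRIVATE_BLOCKS = [ipaddress.ip_network(c)
                  for c in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr: str) -> bool:
    """Return True if addr falls within one of the RFC 1918 ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

# 172.31.255.255 is the top of the 172.16.0.0/12 block; 172.32.0.0 is just past it.
print(is_private("172.31.255.255"), is_private("172.32.0.0"))  # True False
```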

  13. A, B, D. Ports and source and destinations addresses are considered by security group rules.
    Security group rules do not take packet size into consideration. Since a security group is
    directly associated with specific objects, there’s no need to reference the target address.

  14. A, D. IAM roles define how resources access other resources. Users cannot authenticate as
    an instance role, nor can a role be associated with an instance’s internal system process.

  15. B, D. NAT instances and NAT gateways are AWS tools for safely routing traffic between
    private and public subnets and from there, out to the Internet. An Internet gateway con-
    nects a VPC with the Internet, and a virtual private gateway connects a VPC with a remote
    site over a secure VPN. A stand-alone VPN wouldn’t normally be helpful for this purpose.

  16. D. The client computer in an encrypted operation must always use the private key to
    authenticate. For EC2 instances running Windows, you retrieve the password you’ll use for
    the GUI login using your private key.

  17. B. Placement groups allow you to specify where your EC2 instances will live. Load
    balancing directs external user requests between multiple EC2 instances, Systems Manager
    provides tools for monitoring and managing your resources, and Fargate is an interface for
    administering Docker containers on Amazon ECS.

  18. A. Lambda can be used as such a trigger. Beanstalk launches and manages infrastructure
    for your application that will remain running until you manually stop it, ECS manages
    Docker containers but doesn’t necessarily stop them when a task is done, and Auto Scaling
    can add instances to an already running deployment to meet demand.

  19. C. VM Import/Export will do this. S3 buckets are used to store an image, but they’re not
    directly involved in the import operation. Snowball is a physical high-capacity storage
    device that Amazon ships to your office for you to load data and ship back. Direct Connect
    uses Amazon partner providers to build a high-speed connection between your servers and
    your AWS VPC.

  20. B. You can modify a launch template by creating a new version of it; however, the question
    indicates that the Auto Scaling group was created using a launch configuration. You can’t
    modify a launch configuration. Auto Scaling doesn’t use CloudFormation templates.

  21. A. Auto Scaling strives to maintain the number of instances specified in the desired capacity
    setting. If the desired capacity setting isn’t set, Auto Scaling will attempt to maintain the
    number of instances specified by the minimum group size. Given a desired capacity value
    of 5, there should be five healthy instances. If you manually terminate two of them, Auto
    Scaling will create two new ones to replace them. Auto Scaling will not adjust the desired
    capacity or minimum group size.

  22. B, C. Scheduled actions can adjust the minimum and maximum group sizes and the desired
    capacity on a schedule, which is useful when your application has a predictable load
    pattern. To add more instances in proportion to the aggregate CPU utilization of the group,
    implement step scaling policies. Target tracking policies adjust the desired capacity of a
    group to keep the threshold of a given metric near a predefined value. Simple scaling pol-
    icies simply add more instances when a defined CloudWatch alarm triggers, but the number
    of instances added is not proportional to the value of the metric.

  23. B. Automation documents let you perform actions against your AWS resources, including
    taking EBS snapshots. Although called automation documents, you can still manually exe-
    cute them. A command document performs actions within a Linux or Windows instance.
    A policy document works only with State Manager and can’t take an EBS snapshot. There’s
    no manual document type.


Chapter 3: AWS Storage

  1. A, C. Storage Gateway and EFS provide the required read/write access. S3 can be used to
    share files, but it doesn’t offer low-latency access—and its eventual consistency won’t work
    well with filesystems. EBS volumes can be used only for a single instance at a time.

  2. D. In theory, at least, there’s no limit to the data you can upload to a single bucket or
    to all the buckets in your account or to the number of times you upload (using the PUT
    command). By default, however, you are allowed only 100 S3 buckets per account.

  3. A. HTTP (web) requests must address the s3.amazonaws.com domain along with the
    bucket and filenames.

  4. C. A prefix is the name common to the objects you want to group, and a slash character (/)
    can be used as a delimiter. The bar character (|) would be treated as part of the name rather
    than as a delimiter. Although DNS names can have prefixes, they’re not the same as pre-
    fixes in S3.
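
    The grouping behavior described above can be sketched in pure Python; the function
    and sample keys are hypothetical, mimicking how an S3 list operation rolls keys up
    under common prefixes:

    ```python
    # Group object keys under common prefixes, the way an S3 list
    # operation does when given a '/' delimiter.
    def common_prefixes(keys, prefix="", delimiter="/"):
        """Return the set of 'folders' directly under the given prefix."""
        groups = set()
        for key in keys:
            if not key.startswith(prefix):
                continue
            rest = key[len(prefix):]
            if delimiter in rest:
                groups.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        return groups

    keys = ["photos/2023/a.jpg", "photos/2024/b.jpg", "docs/readme.txt"]
    ```

    With these keys, common_prefixes(keys) yields {"photos/", "docs/"}; had the
    delimiter been "|", the full key names would simply not match and no grouping
    would occur.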

  5. A, C. Client-side encryption occurs before an object reaches the bucket (i.e., before it comes
    to rest in the bucket). Only AWS KMS-Managed Keys provide an audit trail. AWS End-to-
    End managed keys doesn’t exist as an AWS service.

  6. A, B, E. S3 server access logs don’t report the source bucket’s current size. They don’t track
    API calls—that’s something covered by AWS CloudTrail.

  7. C, E. The S3 guarantee only covers the physical infrastructure owned by AWS. Temporary
    service outages are related to “availability” and not “durability.”

  8. A. One Zone-IA data is heavily replicated but only within a single availability zone,
    whereas Reduced Redundancy data is only lightly replicated.

  9. B. The S3 Standard-IA (Infrequent Access) class is guaranteed to be available 99.9 percent
    of the time.

  10. D. S3 can’t guarantee instant consistency across its infrastructure for changes to existing
    objects, but there aren’t such concerns for newly created objects.

  11. C. Object versioning must be manually enabled for each bucket to prevent older versions of
    objects from being deleted.

  12. A. S3 lifecycle rules can incorporate specifying objects by prefix. There’s no such thing as a
    lifecycle template.

  13. A. Glacier offers the least expensive and most highly resilient storage within the AWS eco-
    system. Reduced Redundancy is not resilient and, in any case, is no longer recommended.
    S3 One Zone and S3 Standard are relatively expensive.

  14. B, C. ACLs are a legacy feature that isn’t as flexible as IAM or S3 bucket policies. Security
    groups are not used with S3 buckets. KMS is an encryption key management tool and isn’t
    used for authentication.

  15. D. In this context, a principal is an identity to which bucket access is assigned.

  16. B. The default expiry value for a presigned URL is 3,600 seconds (one hour).

  17. A, D. The AWS Certificate Manager can (when used as part of a CloudFront distribution)
    apply an SSL/TLS encryption certificate to your website. You can use Route 53 to associate
    a DNS domain name to your site. EC2 instances and RDS database instances would never
    be used for static websites. You would normally not use KMS for a static website—websites
    are usually meant to be public and encrypting the website assets with a KMS key would
    make it impossible for clients to download them.

  18. B. As of this writing, a single Glacier archive can be no larger than 40 TB.

  19. C. Direct Connect can provide fast network connections to AWS, but it’s very expensive
    and can take up to 90 days to install. Server Migration Service and Storage Gateway aren’t
    meant for moving data at such scale.

  20. A. FSx for Lustre and Elastic File System are primarily designed for access from Linux file
    systems. EBS volumes can’t be accessed by more than a single instance at a time.


Chapter 4: Amazon Virtual Private Cloud

  1. A. The allowed range of prefix lengths for a VPC CIDR is between /16 and /28 inclusive.
    The maximum possible prefix length for an IP subnet is /32, so /56 is not a valid length.

  2. C. A secondary CIDR may come from the same RFC 1918 address range as the primary,
    but it may not overlap with the primary CIDR. 192.168.0.0/24 comes from the same
    address range (192.168.0.0–192.168.255.255) as the primary and does not overlap with
    192.168.16.0/24; 192.168.0.0/16 and 192.168.16.0/23 both overlap with 192.168.16.0/24;
    and 172.31.0.0/16 is not in the same range as the primary CIDR.
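
    The overlap test above can be reproduced with the standard ipaddress module; this
    sketch checks only the overlap rule (RFC 1918 range membership, which rules out
    172.31.0.0/16 here, would be a separate check):

    ```python
    import ipaddress

    # The primary VPC CIDR from the answer above.
    primary = ipaddress.ip_network("192.168.16.0/24")

    def overlaps_primary(cidr: str) -> bool:
        """Return True if the candidate secondary CIDR overlaps the primary."""
        return ipaddress.ip_network(cidr).overlaps(primary)
    ```

    Both 192.168.0.0/16 and 192.168.16.0/23 contain 192.168.16.0/24 and so overlap
    it, while 192.168.0.0/24 (192.168.0.0–192.168.0.255) does not.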

  3. A, D. Options A and D (10.0.0.0/24 and 10.0.0.0/23) are within the VPC CIDR and leave
    room for a second subnet; 10.0.0.0/8 is wrong because prefix lengths less than /16 aren’t
    allowed; and 10.0.0.0/16 doesn’t leave room for another subnet.

  4. B. Multiple subnets may exist in a single availability zone. A subnet cannot span avail-
    ability zones.

  5. A. Every ENI must have a primary private IP address. It can have secondary IP addresses,
    but all addresses must come from the subnet the ENI resides in. Once created, the ENI
    cannot be moved to a different subnet. An ENI can be created independently of an instance
    and later attached to an instance.

  6. D. Each VPC contains a default security group that can’t be deleted. You can create a
    security group by itself without attaching it to anything. But if you want to use it, you must
    attach it to an ENI. You can also attach multiple security groups to the same ENI.

  7. A. An NACL is stateless, meaning it doesn’t track connection state. Every inbound rule
    must have a corresponding outbound rule to permit traffic, and vice versa. An NACL is
    attached to a subnet, whereas a security group is attached to an ENI. An NACL can be
    associated with multiple subnets, but a subnet can have only one NACL.

  8. D. An Internet gateway has no management IP address. It can be associated with only one
    VPC at a time and so cannot grant Internet access to instances in multiple VPCs. It is a
    logical VPC resource and not a virtual or physical router.

  9. A. The destination 0.0.0.0/0 matches all IP prefixes and hence covers all publicly accessible
    hosts on the Internet. ::0/0 is an IPv6 prefix, not an IPv4 prefix. An Internet gateway is the
    target of the default route, not the destination.

  10. A. Every subnet is associated with the main route table by default. You can explicitly
    associate a subnet with another route table. There is no such thing as a default route table,
    but you can create a default route within a route table.

  11. A. An instance must have a public IP address to be directly reachable from the Internet. The
    instance may be able to reach the Internet via a NAT device. The instance won’t necessarily
    receive the same private IP address because it was automatically assigned. The instance will
    be able to reach other instances in the subnet because a public IP is not required.

  12. B. Assigning an EIP to an instance is a two-step process. First you must allocate an EIP,
    and then you must associate it with an ENI. You can’t allocate an ENI, and there’s no
    such thing as an instance’s primary EIP. Configuring the instance to use an automatically
    assigned public IP must occur at instance creation. Changing an ENI’s private IP to match
    an EIP doesn’t actually assign a public IP to the instance, because the ENI’s private address
    is still private.

  13. A. Internet-bound traffic from an instance with an automatically assigned public IP
    will traverse an Internet gateway that will perform NAT. The source address will be the
    instance’s public IP. An instance with an automatically assigned public IP cannot also have
    an EIP. The NAT process will replace the private IP source address with the public IP.
    Option D, 0.0.0.0, is not a valid source address.

  14. A. The NAT device’s default route must point to an Internet gateway, and the instance’s
    default route must point to the NAT device. No differing NACL configurations between
    subnets are required to use a NAT device. Security groups are applied at the ENI level. A
    NAT device doesn’t require multiple interfaces.

  15. D. A NAT gateway is a VPC resource that scales automatically to accommodate increased
    bandwidth requirements. A NAT instance can’t do this. A NAT gateway exists in only one
    availability zone. There are not multiple NAT gateway types. A NAT instance is a regular
    EC2 instance that comes in different types.

  16. A. An Internet gateway performs NAT for instances that have a public IP address. A route
    table defines how traffic from instances is forwarded. An EIP is a public IP address and
    can’t perform NAT. An ENI is a network interface and doesn’t perform NAT.

  17. A. The source/destination check on the NAT instance’s ENI must be disabled to allow the
    instance to receive traffic not destined for its IP and to send traffic using a source address
    that it doesn’t own. The NAT instance’s default route must point to an Internet gateway as
    the target. You can’t assign a primary private IP address after the instance is created.

  18. A. You cannot route through a VPC using transitive routing. Instead, you must directly
    peer the VPCs containing the instances that need to communicate. A VPC peering connec-
    tion uses the AWS internal network and requires no public IP address. Because a peering
    connection is a point-to-point connection, it can connect only two VPCs. A peering con-
    nection can be used only for instance-to-instance communication. You can’t use it to share
    other VPC resources.

  19. A, D. Each peered VPC needs a route to the CIDR of its peer; therefore, you must create
    two routes with the peering connection as the target. Creating only one route is not
    sufficient to enable bidirectional communication. Additionally, the instances’ security
    groups must allow for bidirectional communication. You can’t create more than one
    peering connection between a pair of VPCs.

  20. C. Interregion VPC peering connections aren’t available in all regions and support a
    maximum MTU of 1,500 bytes. You can use IPv4 across an inter-region peering connec-
    tion but not IPv6.

  21. B. VPN connections are always encrypted.

  22. A, C, D. VPC peering, transit gateways, and VPNs all allow EC2 instances in different
    regions to communicate using private IP addresses. Direct Connect is for connecting VPCs
    to on-premises networks, not for connecting VPCs together.

  23. B. A transit gateway route table can hold a blackhole route. If the transit gateway receives
    traffic that matches the route, it will drop the traffic.

  24. D. Tightly coupled workloads include simulations such as weather forecasting. They can’t
    be broken down into smaller, independent pieces, and so require the entire cluster to
    function as a single supercomputer.


Chapter 5: Database Services

  1. A, C. Different relational databases use different terminology. A row, record, and tuple all
    describe an ordered set of columns. An attribute is another term for column. A table con-
    tains rows and columns.

  2. C. A table must contain at least one attribute or column. Primary and foreign keys are used
    for relating data in different tables, but they’re not required. A row can exist within a table,
    but a table doesn’t need a row in order to exist.

  3. D. The SELECT statement retrieves data from a table. INSERT is used for adding data
    to a table. QUERY and SCAN are commands used by DynamoDB, which is a
    nonrelational database.

  4. B. Online transaction processing databases are designed to handle multiple transactions
    per second. Online analytics processing databases are for complex queries against large
    data sets. A key/value store such as DynamoDB can handle multiple transactions per
    second, but it’s not a relational database. There’s no such thing as an offline transaction
    processing database.

  5. B. Although there are six database engines to choose from, a single database instance can
    run only one database engine. If you want to run more than one database engine, you will
    need a separate database instance for each engine.

  6. B, C. MariaDB and Aurora are designed as binary drop-in replacements for MySQL.
    PostgreSQL is designed for compatibility with Oracle databases. Microsoft SQL Server does
    not support MySQL databases.

  7. C. InnoDB is the only storage engine Amazon recommends for MySQL and MariaDB
    deployments in RDS and the only engine Aurora supports. MyISAM is another storage
    engine that works with MySQL but is not compatible with automated backups. XtraDB
    is another storage engine for MariaDB, but Amazon no longer recommends it. The
    PostgreSQL database engine uses its own storage engine by the same name and is not
    compatible with other database engines.

  8. A, C. All editions of the Oracle database engine support the bring-your-own-license model
    in RDS. Microsoft SQL Server and PostgreSQL only support the license-included model.

  9. B. Memory-optimized instances are EBS optimized, providing dedicated bandwidth for
    EBS storage. Standard instances are not EBS optimized and top out at 10,000 Mbps disk
    throughput. Burstable performance instances are designed for development and test work-
    loads and provide the lowest disk throughput of any instance class. There is no instance
    class called storage optimized.

  10. A. MariaDB has a page size of 16 KB. To write 200 MB (204,800 KB) of data every sec-
    ond, it would need 12,800 IOPS. Oracle, PostgreSQL, or Microsoft SQL Server, which all
    use an 8 KB page size, would need 25,600 IOPS to achieve the same throughput. When pro-
    visioning IOPS, you must specify IOPS in increments of 1,000, so 200 and 16 IOPS—which
    would be woefully insufficient anyway—are not valid answers.
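
    The arithmetic above can be sketched directly (the helper name is illustrative):

    ```python
    import math

    def required_iops(throughput_kb_per_s: int, page_size_kb: int) -> int:
        """IOPS needed to sustain a given throughput at a given page size."""
        return math.ceil(throughput_kb_per_s / page_size_kb)

    # 200 MB/s (204,800 KB/s) at MariaDB's 16 KB page size:
    required_iops(200 * 1024, 16)   # 12,800 IOPS
    # The same throughput at an 8 KB page size doubles the requirement:
    required_iops(200 * 1024, 8)    # 25,600 IOPS
    ```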

  11. A. General-purpose SSD storage allocates three IOPS per gigabyte, up to 10,000 IOPS.
    Therefore, to get 600 IOPS, you’d need to allocate 200 GB. Allocating 100 GB would give
    you only 300 IOPS. The maximum storage size for gp2 storage is 16 TB, so 200 TB is not
    a valid value. The minimum amount of storage you can allocate depends on the database
    engine, but it’s no less than 20 GB, so 200 MB is not valid.
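
    The sizing rule above (three baseline IOPS per allocated gigabyte) reduces to a
    one-line calculation; the function name and cap constant simply restate the figures
    in the answer:

    ```python
    import math

    GP2_IOPS_PER_GB = 3       # baseline ratio cited in the answer
    GP2_MAX_IOPS = 10_000     # baseline cap cited in the answer

    def gp2_gb_for_iops(target_iops: int) -> int:
        """Smallest gp2 allocation (GB) whose baseline meets the target IOPS."""
        if target_iops > GP2_MAX_IOPS:
            raise ValueError("target exceeds the gp2 baseline cap")
        return math.ceil(target_iops / GP2_IOPS_PER_GB)
    ```

    So 600 IOPS requires 200 GB, while a 100 GB allocation yields only 300 IOPS.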

  12. C. When you provision IOPS using io1 storage, you must do so in a ratio no greater than 50
    IOPS for 1 GB. Allocating 240 GB of storage would give you 12,000 IOPS. Allocating 200
    GB of storage would fall short, yielding just 10,000 IOPS. Allocating 12 TB would be over-
    kill for the amount of storage required.
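
    The 50:1 ratio works the same way; this sketch just restates the answer’s figures:

    ```python
    IO1_MAX_IOPS_PER_GB = 50  # maximum provisioned-IOPS ratio cited above

    def max_io1_iops(storage_gb: int) -> int:
        """Highest IOPS you can provision for a given io1 allocation."""
        return storage_gb * IO1_MAX_IOPS_PER_GB

    max_io1_iops(240)  # 12,000 IOPS
    max_io1_iops(200)  # 10,000 IOPS -- short of a 12,000 IOPS target
    ```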

  13. A. A read replica only services queries and cannot write to a database. A standby database
    instance in a multi-AZ deployment does not accept queries. Both a primary and a master
    database instance can service queries and writes.

  14. D. Multi-AZ deployments using Oracle, PostgreSQL, MariaDB, MySQL, or Microsoft SQL
    Server replicate data synchronously from the primary to a standby instance. Only a multi-
    AZ deployment using Aurora uses a cluster volume and replicates data to a specific type of
    read replica called an Aurora replica.

  15. A. When you restore from a snapshot, RDS creates a new instance and doesn’t make any
    changes to the failed instance. A snapshot is a copy of the entire instance, not just a copy of
    the individual databases. RDS does not delete a snapshot after restoring from it.

  16. B. The ALL distribution style ensures every compute node has a complete copy of every
    table. The EVEN distribution style splits tables up evenly across all compute nodes. The
    KEY distribution style distributes data according to the value in a specified column. There
    is no distribution style called ODD.

  17. D. The dense compute type can store up to 326 TB of data on solid-state drives. The dense
    storage type can store up to 2 PB of data on magnetic storage. A leader node coordinates
    communication among compute nodes but doesn’t store any databases. There is no such
    thing as a dense memory node type.

  18. A, B. In a nonrelational database, a primary key is required to uniquely identify an item
    and hence must be unique within a table. All primary key values within a table must have
    the same data type. Only relational databases use primary keys to correlate data across dif-
    ferent tables.

  19. B. An order date would not be unique within a table, so it would be inappropriate for
    a partition (hash) key or a simple primary key. It would be appropriate as a sort key, as
    DynamoDB would order items according to the order date, which would make it possible to
    query items with a specific date or within a date range.

  20. A. A single strongly consistent read of an item up to 4 KB consumes one read capacity unit.
    Hence, reading 11 KB of data per second using strongly consistent reads would consume
    three read capacity units. Were you to use eventually consistent reads, you would need only
    two read capacity units, as one eventually consistent read gives you up to 8 KB of data per
    second. Regardless, you must specify a read capacity of at least 1, so 0 is not a valid answer.

  21. B. The dense compute node type uses fast SSDs, whereas the dense storage node type uses
    slower magnetic storage. The leader node doesn’t access the database but coordinates
    communication among compute nodes. KEY is a data distribution strategy Redshift uses,
    but there is no such thing as a key node.

  22. D. A global secondary index can be created even after the table is created, and it can have
    a different partition key and sort key than the base table. A local secondary index can be
    created only at the same time as the table; its partition key must be the same as the base
    table’s, although its sort key can be different. There is no such thing as a global primary
    index or eventually consistent index.

  23. B. NoSQL databases are optimized for queries against a primary key. If you need to query
    data based only on one attribute, you’d make that attribute the primary key. NoSQL data-
    bases are not designed for complex queries. Both NoSQL and relational databases can store
    JSON documents, and both database types can be used by different applications.

  24. D. A graph database is a type of nonrelational database that discovers relationships among
    items. A document-oriented store is a nonrelational database that analyzes and extracts
    data from documents. Relational databases can enforce relationships between records but
    don’t discover them. A SQL database is a type of relational database.


Chapter 6: Authentication and
Authorization—AWS Identity and Access
Management

  1. C. Although each of the other options represents possible concerns, none of them carries
    consequences as disastrous as the complete loss of control over your account.

  2. B. The * character does, indeed, represent global application. The Action element refers to
    the kind of action requested (list, create, etc.), the Resource element refers to the particular
    AWS account resource that’s the target of the policy, and the Effect element refers to the
    way IAM should react to a request.

  3. A, B, C. Unless there’s a policy that explicitly allows an action, it will be denied. Therefore,
    a user with no policies or with a policy permitting S3 actions doesn’t permit EC2 instance
    permissions. Similarly, when two policies conflict, the more restrictive will be honored. The
    AdministratorAccess policy opens up nearly all AWS resources, including EC2. There’s no
    such thing as an IAM action statement.
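
    The evaluation logic described above (implicit default deny, with an explicit deny
    overriding any allow) can be modeled as a toy function; this is a sketch of the
    decision order, not the real IAM engine:

    ```python
    # Toy model: everything is denied by default, an explicit allow grants
    # access, and an explicit deny overrides any allow. Because deny is
    # checked first, the order of the policies never affects the outcome.
    def evaluate(effects):
        """effects: iterable of 'allow'/'deny' matches for a request."""
        if "deny" in effects:
            return "deny"        # explicit deny always wins
        if "allow" in effects:
            return "allow"
        return "deny"            # implicit default deny
    ```

    For example, evaluate([]) and evaluate(["allow", "deny"]) both return "deny",
    while evaluate(["allow"]) returns "allow".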

  4. B, C. If you don’t perform any administration operations with regular IAM users, then
    there really is no point for them to exist. Similarly, without access keys, there’s a limit to
    what a user will be able to accomplish. Ideally, all users should use MFA and strong pass-
    words. The AWS CLI is an important tool, but it isn’t necessarily the most secure.

  5. D. The top-level command is iam, and the correct subcommand is
    get-access-key-last-used. The parameter is identified by --access-key-id.
    Parameters (not subcommands) are always prefixed with -- characters.

  6. B. IAM groups are primarily about simplifying administration. Groups have no direct
    impact on resource usage or response times and only an indirect impact on locking down
    the root user.

  7. C. X.509 certificates are used for encrypting SOAP requests, not authentication. The other
    choices are all valid identities within the context of an IAM role.

  8. A. AWS CloudHSM provides encryption that’s FIPS 140-2 compliant. Key Management
    Service manages encryption infrastructure but isn’t FIPS 140-2 compliant. Security Token
    Service is used to issue tokens for valid IAM roles, and Secrets Manager handles secrets for
    third-party services or databases.

  9. B. AWS Directory Service for Microsoft Active Directory provides Active Directory authen-
    tication within a VPC environment. Amazon Cognito provides user administration for your
    applications. AWS Secrets Manager handles secrets for third-party services or databases.
    AWS Key Management Service manages encryption infrastructure.

  10. A. Identity pools provide temporary access to defined AWS services to your application
    users. Sign-up and sign-in is managed through Cognito user pools. KMS and/or CloudHSM
    provide encryption infrastructure. Credential delivery to databases or third-party applica-
    tions is provided by AWS Secrets Manager.

  11. A, D, E. Options A, D, and E are appropriate steps. Your IAM policies will be as effective
    as ever, even if outsiders know your policies. Since even an account’s root user would never
    have known other users’ passwords, there’s no reason to change them.

  12. B. IAM policies are global—they’re not restricted to any one region. Policies do, however,
    require an action (like create buckets), an effect (allow), and a resource (S3).

  13. B, C. IAM roles require a defined trusted entity and at least one policy. However, the rele-
    vant actions are defined by the policies you choose, and roles themselves are uninterested in
    which applications use them.

  14. D. STS tokens are used as temporary credentials to external identities for resource access to
    IAM roles. Users and groups would not use tokens to authenticate, and policies are used to
    define the access a token will provide, not the recipient of the access.

  15. C. Policies must be written in JSON format.

  16. B, D. The correct Resource line would read "Resource": "*". And the correct Action
    line would read "Action": "*". There is no "Target" line in an IAM policy. "Permit"
    is not a valid value for "Effect".

  17. B. User pools provide sign-up and sign-in for your application’s users. Temporary access
    to defined AWS services to your application users is provided by identity pools. KMS
    and/or CloudHSM provide encryption infrastructure. Credential delivery to databases or
    third-party applications is provided by AWS Secrets Manager.

  18. C, D. An AWS managed service takes care of all underlying infrastructure management
    for you. In this case, that will include data replication and software updates. On-premises
    integration and multi-AZ deployment are important infrastructure features, but they’re not
    unique to “managed” services.

  19. B, C, D. Options B, C, and D are all parts of the key rotation process. In this context, key
    usage monitoring is only useful to ensure that none of your applications is still using an old
    key that’s set to be retired. X.509 certificates aren’t used for access keys.

  20. A. You attach IAM roles to services in order to give them permissions over resources in
    other services within your account.


Chapter 7: CloudTrail, CloudWatch, and
AWS Config

  1. B, D. Creating a bucket and subnet are API actions, regardless of whether they’re performed
    from the web console or AWS CLI. Uploading an object to an S3 bucket is a data event, not
    a management event. Logging into the AWS console is a non-API management event.

  2. C. Data events include S3 object-level activity and Lambda function executions. Download-
    ing an object from S3 is a read-only event. Uploading a file to an S3 bucket is a write-only
    event and hence would not be logged by the trail. Viewing an S3 bucket and creating a
    Lambda function are management events, not data events.

  3. C. CloudTrail stores 90 days of event history for each region, regardless of whether a trail
    is configured. Event history is specific to the events occurring in that region. Because the
    trail was configured to log read-only management events, the trail logs would not contain a
    record of the trail’s deletion. They might contain a record of who viewed the trail, but that
    would be insufficient to establish who deleted it. There is no such thing as an IAM user log.

  4. B. CloudWatch uses dimensions to uniquely identify metrics with the same name and
    namespace. Metrics in the same namespace will necessarily be in the same region. The data
    point of a metric and the timestamp that it contains are not unique and can’t be used to
    uniquely identify a metric.

  5. C. Basic monitoring sends metrics every five minutes, whereas detailed monitoring sends
    them every minute. CloudWatch can store metrics at regular or high resolution, but this
    affects how the metric is timestamped, rather than the frequency with which it’s delivered
    to CloudWatch.

  6. A. CloudWatch can store high-resolution metrics at subminute resolution. Therefore, updat-
    ing a metric at 15:57:08 and again at 15:57:37 will result in CloudWatch storing two sep-
    arate data points. Only if the metric were regular resolution would CloudWatch overwrite
    an earlier data point with a later one. Under no circumstances would CloudWatch ignore a
    metric update.

  7. D. Metrics stored at one-hour resolution age out after 15 months. Five-minute resolutions
    are stored for 63 days. One-minute resolution metrics are stored for 15 days. High-resolu-
    tion metrics are kept for 3 hours.
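
    The retention tiers above can be collected into a simple lookup keyed by metric
    resolution (durations in hours; the tier names are illustrative, and 15 months is
    approximated as 450 days):

    ```python
    # CloudWatch metric retention by resolution, per the answer above.
    RETENTION_HOURS = {
        "high-resolution": 3,            # sub-minute data: 3 hours
        "1-minute": 15 * 24,             # 15 days
        "5-minute": 63 * 24,             # 63 days
        "1-hour": 15 * 30 * 24,          # roughly 15 months
    }

    def retention_for(resolution: str) -> int:
        return RETENTION_HOURS[resolution]
    ```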

  8. A. To graph a metric’s data points, specify the Sum statistic and set the period equal to the
    metric’s resolution, which in this case is five minutes. Graphing the Sum or Average sta-
    tistic over a one-hour period will not graph the metric’s data points but rather the Sum or
    Average of those data points over a one-hour period. Using the Sample count statistic over
    a five-minute period will yield a value of 1 for each period, since there’s only one data point
    per period.
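
    The effect described above can be demonstrated with a toy aggregator: graphing Sum
    over a period equal to the metric’s resolution reproduces the raw data points, while
    a longer period collapses them. Timestamps are minutes and the data is hypothetical.

    ```python
    # Bucket (timestamp, value) pairs into periods and apply a statistic.
    def aggregate(points, period, stat):
        """points: list of (timestamp_min, value); returns one value per period."""
        buckets = {}
        for ts, val in points:
            buckets.setdefault(ts // period, []).append(val)
        if stat == "Sum":
            return [sum(v) for _, v in sorted(buckets.items())]
        if stat == "SampleCount":
            return [len(v) for _, v in sorted(buckets.items())]
        raise ValueError(stat)

    # A metric with five-minute resolution:
    points = [(0, 10), (5, 20), (10, 30)]
    aggregate(points, 5, "Sum")    # [10, 20, 30] -- the raw data points
    aggregate(points, 60, "Sum")   # [60] -- one aggregated value per hour
    ```

    Likewise, SampleCount over a five-minute period yields 1 for every period, as the
    answer notes.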

  9. B. CloudWatch uses a log stream to store log events from a single source. Log groups store
    and organize log streams but do not directly store log events. A metric filter extracts met-
    rics from logs but doesn’t store anything. The CloudWatch agent can deliver logs to Cloud-
    Watch from a server but doesn’t store logs.

  10. A, D. Every log stream must be in a log group. The retention period setting of a log group
    controls how long CloudWatch retains log events within those streams. You can’t manually
    delete log events individually, but you can delete all events in a log stream by deleting the
    stream. You can’t set a retention period on a log stream directly.

  11. A, C. CloudTrail will not stream events greater than 256 KB in size. There’s also a normal
    delay, typically up to 15 minutes, before an event appears in a CloudWatch log stream.
    Metric filters have no bearing on what log events get put into a log stream. Although a
    misconfigured or missing IAM role would prevent CloudTrail from streaming logs to
    CloudWatch, the question indicates that some events are present. Hence, the IAM role is
    correctly configured.

  12. B, D. If an EBS volume isn’t attached to a running instance, EBS won’t generate any met-
    rics to send to CloudWatch. Hence, the alarm won’t be able to collect enough data points
    to alarm. The evaluation period can be no more than 24 hours, and the alarm was created
    two days ago, so the evaluation period has elapsed. The data points to monitor don’t have
    to cross the threshold for CloudWatch to determine the alarm state.

  13. B. To have CloudWatch treat missing data as exceeding the threshold, set the Treat Missing
    Data As option to Breaching. Setting it to Not Breaching will have the opposite effect.
    Setting it to As Missing will cause CloudWatch to ignore the missing data and behave as
    if those evaluation periods didn’t occur. The Ignore option causes the alarm not to change
    state in response to missing data. There’s no option to treat missing data as Not Missing.

  14. C, D. CloudWatch can use the Simple Notification Service to send a text message.
    CloudWatch refers to this as a Notification action. To reboot an instance, you must
    use an EC2 action. The Auto Scaling action will not reboot an instance. SMS is not a
    valid CloudWatch alarm action.



  15. A. The recover action is useful when there’s a problem with an instance that requires AWS
    involvement to repair, such as a hardware failure. The recover action migrates the same in-
    stance to a new host. Rebooting an instance assumes the instance is running and entails the
    instance remaining on the same host. Recovering an instance does not involve restoring any
    data from a snapshot, as the instance retains the same EBS volume(s).

  16. B. If CloudTrail were logging write-only management events in the same region as the
    instance, it would have generated trail logs containing the deletion event. Deleting a log
    stream containing CloudTrail events does not delete those events from the trail logs stored
    in S3. Deleting an EC2 instance is not an IAM event. If AWS Config were tracking changes
    to EC2 instances in the region, it would have recorded a timestamped configuration item
    for the deletion, but it would not include the principal that deleted the instance.

  17. B, C, D. The delivery channel must include an S3 bucket name and may specify an
    SNS topic and the delivery frequency of configuration snapshots. You can’t specify a
    CloudWatch log stream.

  18. D. You can’t delete configuration items manually, but you can have AWS Config delete them
    after no less than 30 days. Pausing or deleting the configuration recorder will stop AWS
    Config from recording new changes but will not delete configuration items. Deleting config-
    uration snapshots, which are objects stored in S3, will not delete the configuration items.

  19. C, D. CloudWatch can graph only a time series. METRICS()/AVG(m1) and m1/m2
    both return a time series. AVG(m1)-m1 and AVG(m1) return scalar values and can’t be
    graphed directly.

  20. B. Deleting the rule will prevent AWS Config from evaluating resource configurations
    against it. Turning off the configuration recorder won’t prevent AWS Config from evalu-
    ating the rule. It’s not possible to delete the configuration history for a resource from AWS
    Config. When you specify a frequency for periodic checks, you must specify a valid fre-
    quency, or else AWS Config will not accept the configuration.

  21. B. EventBridge can take an action in response to an event, such as an EC2 instance launch.
    CloudWatch Alarms can take an action based only on a metric. CloudTrail logs events but
    doesn’t generate any alerts by itself. CloudWatch Metrics is used for graphing metrics.


Chapter 8: The Domain Name System
and Network Routing: Amazon Route 53
and Amazon CloudFront

  1. A. Option A is the correct answer. Name servers resolve IP addresses from domain names,
    allowing clients to connect to resources. Domain registration is performed by domain name
    registrars. Routing policies are applied through record sets within hosted zones.

  2. C. A domain is a set of resources identified by a single domain name. FQDN stands for
    fully qualified domain name. Policies for resolving requests are called routing policies.



  3. D. The rightmost section of an FQDN address is the TLD. aws. would be a subdomain
    or host, amazon. is the SLD, and amazon.com/documentation/ points to a resource
    stored at the web root of the domain server.

  4. A. CNAME is a record type. TTL, record type, and record data are all configuration ele-
    ments, not record types.

  5. C. An A record maps a hostname to an IPv4 address. NS records identify name servers.
    SOA records document start of authority data. CNAME records define one hostname as an
    alias for another.

  6. A, C, D. Route 53 provides domain registration, health checks, and DNS management.
    Content delivery network services are provided by CloudFront. Secure and fast network
    connections to a VPC can be created using AWS Direct Connect.

  7. C. Geolocation can control routing by the geographic origin of the request. The simple
    policy sends traffic to a single resource. Latency sends content using the fastest origin
    resource. Multivalue can be used to make a deployment more highly available.

  8. A. Latency selects the available resource with the lowest latency. Weighted policies route
    among multiple resources by percentage. Geolocation tailors request responses to the
    end user’s location but isn’t concerned with response speed. Failover incorporates backup
    resources for higher availability.

  9. B. Weighted policies route among multiple resources by percentage. Failover incorporates
    backup resources for higher availability. Latency selects the available resource with the
    lowest latency. Geolocation tailors request responses to the end user’s location.

  10. D. Failover incorporates backup resources for higher availability. Latency selects the avail-
    able resource with the lowest latency. Weighted policies route among multiple resources by
    percentage. Geolocation tailors request responses to the end user’s location.

  11. A, D. Public and private hosting zones are real options. Regional, hybrid, and VPC zones
    don’t exist (although private zones do map to VPCs).

  12. A, B. To transfer a domain, you’ll need to make sure the domain isn’t set to locked.
    You’ll also need an authorization code that you’ll provide to Route 53. Copying name
    server addresses is necessary only for managing domains that are hosted on but not
    registered with Route 53. CNAME record sets are used to define one hostname as an
    alias for another.

  13. B. You can enable remotely registered domains on Route 53 by copying name server
    addresses into the remote registrar-provided interface (not the other way around). Making
    sure the domain isn’t set to locked and requesting authorization codes are used to
    transfer a domain to Route 53, not just to manage its routing. CNAME record sets are
    used to define one hostname as an alias for another.

  14. C. You specify the web page that you want used for testing when you configure your health
    check. There is no default page. Remote SSH sessions would be impossible for a number of
    reasons and wouldn’t definitively confirm a running resource in any case.



  15. A. Geoproximity is about precisely pinpointing users, whereas geolocation uses geopolitical
    boundaries.

  16. A, D. CloudFront is optimized for handling heavy download traffic and for caching website
    content. Users on a single corporate campus or accessing resources through a VPN will not
    benefit from the distributed delivery provided by CloudFront.

  17. C. API Gateway is used to generate custom client SDKs for your APIs to connect your back-
    end systems to mobile, web, and server applications or services.

  18. A. Choosing a price class offering limited distribution is the best way to reduce costs. Non-
    HTTPS traffic can be excluded (thereby saving some money) but not through the configu-
    ration of an SSL certificate (you’d need further configuration). Disabling Alternate Domain
    Names or enabling Compress Objects Automatically won’t reduce costs.

  19. C. Not every CloudFront distribution is optimized for low-latency service. Requests of an
    edge location will only achieve lower latency after copies of your origin files are already
    cached. Therefore, a response to the first request might not be fast because CloudFront still
    has to copy the file from the origin server.

  20. B. RTMP distributions can manage content only from S3 buckets. RTMP is intended for
    the distribution of video content.


Chapter 9: Simple Queue Service
and Kinesis

  1. C, D. After a consumer grabs a message, the message is not deleted. Instead, the message
    becomes invisible to other consumers for the duration of the visibility timeout. The message
    is automatically deleted from the queue after it’s been in there for the duration of the reten-
    tion period.

  2. B. The default visibility timeout for a queue is 30 seconds. It can be configured to between
    0 seconds and 12 hours.

  3. D. The default retention period is 4 days but can be set to between 1 minute and 14 days.

  4. B. You can use a message timer to hide a message for up to 15 minutes. Per-queue delay set-
    tings apply to all messages in the queue unless you specifically override the setting using a
    message timer.

  5. B. A standard queue can handle up to 120,000 in-flight messages. A FIFO queue can
    handle up to about 20,000. Delay and short are not valid queue types.

  6. A. FIFO queues always deliver messages in the order they were received. Standard queues
    usually do as well, but they’re not guaranteed to. LIFO, FILO, and basic aren’t valid
    queue types.



  7. C. Standard queues may occasionally deliver a message more than once. FIFO queues will
    not. Using long polling alone doesn’t result in duplicate messages.

  8. B. Short polling, which is the default, may occasionally fail to deliver messages. To ensure
    delivery of these messages, use long polling.

  9. D. Dead-letter queues are for messages that a consumer is unable to process. To use a
    dead-letter queue, you create a queue of the same type as the source queue, and set the
    maxReceiveCount to the maximum number of times a message can be received before it’s
    moved to the dead-letter queue.

  10. C. If the retention period for the dead-letter queue is 10 days, and a message is already 6
    days old when it’s moved to the dead-letter queue, it will spend at most 4 days in the dead-
    letter queue before being deleted.
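The retention arithmetic above can be sketched in a couple of lines (the helper name is illustrative, not part of the SQS API):

```python
# Hedged sketch of the arithmetic above: a message's age counts against the
# dead-letter queue's retention period, so the time remaining is the
# retention period minus the message's age when it was moved.

def days_left_in_dlq(retention_days: int, age_days: int) -> int:
    """Maximum days a message can sit in the dead-letter queue."""
    return retention_days - age_days

print(days_left_in_dlq(10, 6))  # 4
```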

  11. B. Kinesis Video Streams is designed to work with time-indexed data such as RADAR
    images. Kinesis ML doesn’t exist.

  12. A, C. You can’t specify a retention period over 7 days, so your only option is to create a
    Kinesis Data Firehose delivery stream that receives data from the Kinesis Data Stream and
    sends the data to an S3 bucket.

  13. C. Kinesis Data Firehose requires you to specify a destination for a delivery stream. Kinesis
    Video Streams and Kinesis Data Streams use a producer-consumer model that allows con-
    sumers to subscribe to a stream. There is no such thing as Kinesis Data Warehouse.

  14. B. The Amazon Kinesis Agent can automatically stream the contents of a file to Kinesis.
    There’s no need to write any custom code or move the application to EC2. The CloudWatch
    Logs Agent can’t send logs to a Kinesis Data Stream.

  15. C. SQS and Kinesis Data Streams are similar. But SQS is designed to temporarily hold
    a small message until a single consumer processes it, whereas Kinesis Data Streams
    is designed to provide durable storage and playback of large data streams to multiple
    consumers.

  16. B, C. You should stream the log data to Kinesis Data Streams and then have Kinesis Data
    Firehose consume the data and stream it to Redshift.

  17. C. Kinesis is for streaming data such as stock feeds and video. Static websites are not
    streaming data.

  18. B. Shards determine the capacity of a Kinesis Data Stream. A single shard gives you writes
    of up to 1 MB per second, so you’d need two shards to get 2 MB of throughput.

  19. A. Shards determine the capacity of a Kinesis Data Stream. Each shard supports 2 MB of
    reads per second. Because consumers are already receiving a total of 3 MB per second, it
    implies you have at least two shards already configured, supporting a total of 4 MB per sec-
    ond. Therefore, to support 5 MB per second you need to add just one more shard.
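The shard arithmetic in answers 18 and 19 can be checked with a short sketch (helper names are illustrative; the 1 MB/s write and 2 MB/s read limits per shard come from the answers above):

```python
import math

# Kinesis Data Streams per-shard limits cited in the answers above.
WRITE_MBPS_PER_SHARD = 1
READ_MBPS_PER_SHARD = 2

def shards_for_writes(mbps: float) -> int:
    """Shards needed to sustain a given write throughput."""
    return math.ceil(mbps / WRITE_MBPS_PER_SHARD)

def shards_for_reads(mbps: float) -> int:
    """Shards needed to sustain a given read throughput."""
    return math.ceil(mbps / READ_MBPS_PER_SHARD)

print(shards_for_writes(2))  # 2 shards for 2 MB/s of writes
print(shards_for_reads(5))   # 3 shards for 5 MB/s of reads
# Consumers already receiving 3 MB/s implies at least 2 shards exist
# (4 MB/s of read capacity), so only 3 - 2 = 1 shard must be added.
```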

  20. A. Kinesis Data Firehose is designed to funnel streaming data to big data applications, such
    as Redshift or Hadoop. It’s not designed for videoconferencing.



    Chapter 10: The Reliability Pillar

    1. C. Availability of 99.95 percent translates to about 22 minutes of downtime per month, or
      4 hours and 23 minutes per year. Availability of 99.999 percent is less than 30 seconds of
      downtime per month, but the question calls for the minimum level of availability. Avail-
      ability of 99 percent yields more than 7 hours of downtime per month, whereas 99.9 per-
      cent is more than 43 minutes of downtime per month.
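The downtime figures in this answer follow directly from the availability percentage; a minimal sketch (the helper name is illustrative):

```python
# Hedged sketch: expected downtime implied by an availability percentage.

def downtime_minutes(availability_pct: float, period_minutes: float) -> float:
    """Minutes of downtime permitted over a period at a given availability."""
    return (1 - availability_pct / 100) * period_minutes

MINUTES_PER_MONTH = 30 * 24 * 60  # using a 30-day month
for pct in (99.0, 99.9, 99.95, 99.999):
    print(f"{pct}% -> {downtime_minutes(pct, MINUTES_PER_MONTH):.2f} min/month")
# 99.95% -> about 21.6 minutes per month; 99.999% -> about 26 seconds.
```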

    2. A. The EC2 instances are redundant components, so to calculate their availability, you mul-
      tiply the component failure rates and subtract the product from 100 percent. In this case,
      100% – (10% × 10%) = 99%. Because the database represents a hard dependency, you mul-
      tiply the availability of the EC2 instances by the availability of the RDS instance, which is
      95 percent. In this case, 99% × 95% = 94.05%. A total availability of 99 percent may seem
      intuitive, but because the redundant EC2 instances have a hard dependency on the RDS
      instance, you must multiply the availabilities together. A total availability of 99.99 percent is
      unachievable since it’s well above the availability of any of the components.
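The calculation in this answer can be reproduced with a short sketch (function names are illustrative):

```python
# Hedged sketch of the availability math above: redundant components multiply
# failure rates; a hard (serial) dependency multiplies availabilities.

def redundant_availability(*failure_rates: float) -> float:
    """Availability of redundant components: 1 minus the product of failures."""
    product = 1.0
    for rate in failure_rates:
        product *= rate
    return 1.0 - product

ec2 = redundant_availability(0.10, 0.10)  # two instances, 10% failure each
total = ec2 * 0.95                        # hard dependency on 95%-available RDS
print(f"{total:.4f}")                     # 0.9405, i.e. 94.05 percent
```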

    3. B. DynamoDB offers 99.99 percent availability and low latency. Because it’s distributed,
      data is stored across multiple availability zones. You can also use DynamoDB global tables
      to achieve even higher availability: 99.999 percent. Multi-AZ RDS offerings can provide
      low latency performance, particularly when using Aurora, but the guaranteed availability
      is capped at 99.95 percent. Hosting your own SQL database isn’t a good option because,
      although you could theoretically achieve high availability, it would come at the cost of
      significant time and effort.

    4. B, D. One cause of application failures is resource exhaustion. By scoping out large enough
      instances and scaling out to make sure you have enough of them, you can prevent failure
      and thus increase availability. Scaling instances in may help with cost savings but won’t
      help availability. Storing web assets in S3 instead of hosting them from an instance can help
      with performance but won’t have an impact on availability.

    5. B. You can modify a launch template by creating a new version of it; however, the question
      indicates that the Auto Scaling group was created using a launch configuration. You can’t
      modify a launch configuration. Auto Scaling doesn’t use CloudFormation templates.

    6. A. Auto Scaling strives to maintain the number of instances specified in the desired capacity
      setting. If the desired capacity setting isn’t set, Auto Scaling will attempt to maintain the
      number of instances specified by the minimum group size. Given a desired capacity of 5,
      there should be five healthy instances. If you manually terminate two of them, Auto Scaling
      will create two new ones to replace them. Auto Scaling will not adjust the desired capacity
      or minimum group size.

    7. A, D, E. Auto Scaling monitors the health of instances in the group using either ELB or
      EC2 instance and system checks. It can’t use Route 53 health checks. Dynamic scaling
      policies can use CloudWatch Alarms, but these are unrelated to checking the health of
      instances.



    8. B, C. Scheduled actions can adjust the minimum and maximum group sizes and the desired
      capacity on a schedule, which is useful when your application has a predictable load
      pattern. To add more instances in proportion to the aggregate CPU utilization of the group,
      implement step scaling policies. Target tracking policies adjust the desired capacity of a
      group to keep the threshold of a given metric near a predefined value. Simple scaling pol-
      icies simply add more instances when a defined CloudWatch alarm triggers, but the number
      of instances added is not proportional to the value of the metric.

    9. A, D. Enabling versioning protects objects against data corruption and deletion by keep-
      ing before and after copies of every object. The Standard storage class replicates objects
      across multiple availability zones in a region, guarding against the failure of an entire zone.
      Bucket policies may protect against accidental deletion, but they don’t guard against data
      corruption. Cross-region replication applies to new objects, not existing ones.

    10. C. The Data Lifecycle Manager can automatically create snapshots of an EBS volume every
      12 or 24 hours and retain up to 1,000 snapshots. Backing up files to EFS is not an option
      because a spot instance may terminate before the cron job has a chance to complete.
      CloudWatch Logs doesn’t support storing binary files.

    11. D. Aurora allows you to have up to 15 replicas. MariaDB, MySQL, and PostgreSQL allow
      you to have only up to five.

    12. B. When you enable automated snapshots, RDS backs up database transaction logs about
      every five minutes. Configuring multi-AZ will enable synchronous replication between the
      two instances, but this is useful for avoiding failures and is unrelated to the time it takes to
      recover a database. Read replicas are not appropriate for disaster recovery because data is
      copied to them asynchronously, and there can be a significant delay in replication, resulting
      in an RPO of well over five minutes.

    13. A, C. AWS sometimes adds additional availability zones to a region. To take advantage of
      a new zone, you’ll need to be able to add a new subnet in it. You also may decide later that
      you may need another subnet or tier for segmentation or security purposes. RDS doesn’t
      require a separate subnet. It can share the same subnet with other VPC resources. Adding a
      secondary CIDR to a VPC doesn’t require adding another subnet.

    14. A, D. Fifty EC2 instances, each with two private IP addresses, would consume 100 IP
      addresses in a subnet. Additionally, AWS reserves five IP addresses in every subnet. The sub-
      net therefore must be large enough to hold 105 IP addresses. 172.21.0.0/25 and 10.0.0.0/21
      are sufficiently large. 172.21.0.0/26 allows room for only 63 IP addresses. 10.0.0.0/8 is
      large enough, but a subnet prefix length must be at least /16.
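The subnet-sizing arithmetic in this answer can be verified with a short sketch (the helper name is illustrative):

```python
# Hedged sketch: an IPv4 /n subnet contains 2**(32 - n) addresses, and AWS
# reserves five addresses in every subnet, so 50 instances with two private
# IPs each need a subnet holding at least 105 addresses.

def subnet_size(prefix_length: int) -> int:
    """Total addresses in an IPv4 subnet with the given prefix length."""
    return 2 ** (32 - prefix_length)

needed = 50 * 2 + 5  # 100 instance addresses plus the 5 AWS reserves
for prefix in (25, 26, 21):
    size = subnet_size(prefix)
    verdict = "fits" if size >= needed else "too small"
    print(f"/{prefix}: {size} addresses -> {verdict}")
# /25: 128 -> fits; /26: 64 -> too small; /21: 2048 -> fits
```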

    15. A, D. Direct Connect offers consistent speeds and latency to the AWS cloud. Because Direct
      Connect bypasses the public Internet, it’s more secure. For speeds, you can choose 1 Gbps
      or 10 Gbps, so Direct Connect wouldn’t offer a bandwidth increase over using the existing
      10 Gbps Internet connection. Adding a Direct Connect connection wouldn’t have an effect
      on end-user experience, since they would still use the Internet to reach your AWS resources.



    16. B. When connecting a VPC to an external network, whether via a VPN connection or
      Direct Connect, make sure the IP address ranges don’t overlap. In-transit encryption,
      though useful for securing network traffic, isn’t required for proper connectivity. IAM
      policies restrict API access to AWS resources, but this is unrelated to network connec-
      tivity. Security groups are VPC constructs and aren’t something you configure on a data
      center firewall.

    17. A, C. CloudFormation lets you provision and configure EC2 instances by defining your
      infrastructure as code. This lets you update the AMI easily and build a new instance from
      it as needed. You can include application installation scripts in the user data to automate
      the build process. Auto Scaling isn’t appropriate for this scenario because you’re going
      to sometimes terminate and re-create the instance. Dynamic scaling policies are part of
      Auto Scaling and are likewise not appropriate here.

    18. D. By running four instances in each zone, you have a total of 12 instances in the region. If
      one zone fails, you lose four of those instances and are left with eight. Running eight or 16
      instances in each zone would allow you to withstand one zone failure, but the question asks
      for the minimum number of instances. Three instances per zone would give you nine total
      in the region, but if one zone fails, you’d be left with only six.
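The arithmetic behind this answer is a one-liner (the helper name is illustrative):

```python
# Hedged sketch: with n instances in each of three zones, losing one zone
# leaves instances in the remaining two zones.

def after_zone_failure(per_zone: int, zones: int = 3, failed: int = 1) -> int:
    """Instances still running after the given number of zones fail."""
    return per_zone * (zones - failed)

print(after_zone_failure(4))  # 8 -- four per zone survives with eight left
print(after_zone_failure(3))  # 6 -- three per zone leaves only six
```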

    19. C. Availability of 99.99 percent corresponds to about 52 minutes of downtime per year; 99
      percent, 99.9 percent, and 99.95 percent entail significantly more downtime.

    20. A, C. Because users access a public domain name that resolves to an elastic load balancer,
      you’ll need to update the DNS record to point to the load balancer in the other region.
      You’ll also need to fail the database over to the other region so that the read replica can
      become the primary. Load balancers are not cross-region, so it’s not possible to point the
      load balancer in one region to instances in another. Restoring the database isn’t necessary
      because the primary database instance asynchronously replicates data to the read replicas
      in the other region.


Chapter 11: The Performance
Efficiency Pillar

  1. A, B, D. ECUs, vCPUs, and the Intel AES-NI encryption instruction set are all instance
    type parameters. Aggregate cumulative cost per request has nothing to do with EC2 instances but is
    a common key performance indicator (KPI). Read replicas are a feature used with data-
    base engines.

  2. A, B, C. A launch configuration pointing to an EC2 AMI and an associated load bal-
    ancer are all, normally, essential to an Auto Scaling operation. Passing a startup script to
    the instance at runtime may not be necessary, especially if your application is already set
    up as part of your AMI. OpsWorks stacks are orchestration automation tools and aren’t
    necessary for successful Auto Scaling.



  3. B. Defining a capacity metric, minimum and maximum instances, and a load balancer
    are all done during Auto Scaling configuration. Only the AMI is defined by the launch
    configuration.

  4. A. Elastic Container Service is a good platform for microservices. Lambda functions execu-
    tions are short-lived (having a 15-minute maximum) and wouldn’t work well for this kind
    of deployment. Beanstalk operations aren’t ideal for microservices. ECR is a repository for
    container images and isn’t a deployment platform on its own.

  5. D. RAID optimization is an OS-level configuration and can, therefore, be performed only
    from within the OS.

  6. C. Cross-region replication can provide both low-latency and resilience. CloudFront and S3
    Transfer Acceleration deliver low latency but not resilience. RAID arrays can deliver both,
    but only on EBS volumes.

  7. A. S3 Transfer Acceleration makes use of CloudFront locations. Neither S3 Cross-Region
    Replication nor EC2 Auto Scaling uses CloudFront edge locations, and the EBS Data
    Transfer Wizard doesn’t exist (although perhaps it should).

  8. B. Scalability is managed automatically by RDS, and there is no way for you to improve it
    through user configurations. Indexes, schemas, and views should be optimized as much
    as possible.

  9. D, E. Automated patches, out-of-the-box Auto Scaling, and updates are benefits of a
    managed service like RDS, not of custom-built EC2-based databases.

  10. B, D. Integrated enhanced graphics and Auto Scaling can both help here. Amazon Light-
    sail is meant for providing quick and easy compute deployments. Elasticsearch isn’t likely
    to help with a graphics workload. CloudFront can help with media transfers, but not with
    graphics processing.

  11. C. The network load balancer is designed for any TCP-based application and preserves the
    source IP address. The application load balancer terminates HTTP and HTTPS connec-
    tions, and it’s designed for applications running in a VPC, but it doesn’t preserve the source
    IP address. The Classic load balancer works with any TCP-based application but doesn’t
    preserve the source IP address. There is no such thing as a Dynamic load balancer.

  12. A, B, D. The CloudFormation wizard, prebuilt templates, and JSON formatting are all
    useful for CloudFormation deployments. CloudDeploy and Systems Manager are not good
    sources for CloudFormation templates.

  13. A. There is no default node name in a CloudFormation configuration—nor is there a node
    of any sort.

  14. B, E. Chef and Puppet are both integrated with AWS OpsWorks. Terraform, SaltStack, and
    Ansible are not directly integrated with OpsWorks.

  15. A, C. Dashboards and SNS are important elements of resource monitoring. There are no
    tools named CloudWatch OneView or AWS Config dashboards.



  16. A, B. Advance permission from AWS is helpful only for penetration testing operations. A
    complete record of your account’s resource configuration changes would make sense in
    the context of AWS Config, but not CloudWatch. Service Catalog helps you audit your
    resources but doesn’t contribute to ongoing event monitoring.

  17. D. Config is an auditing tool. CloudTrail tracks API calls. CloudWatch monitors system
    performance. CodePipeline is a continuous integration/continuous deployment (CI/CD)
    orchestration service.

  18. B, C. ElastiCache executions can use either Redis or Memcached. Varnish and Nginx are
    both caching engines but are not integrated into ElastiCache.

  19. A, D. Redis is useful for operations that require persistent session states and/or greater
    flexibility. If you’re after speed, Redis might not be the best choice; in many cases,
    Memcached will provide faster service. Redis configuration has a rather steep learning curve.

  20. B. Read replicas based on the Oracle database are not possible.


Chapter 12: The Security Pillar

  1. A, C. A password policy can specify a minimum password length but not a maximum. It
    can prevent a user from reusing a password they used before but not one that another user
    has used. A password policy can require a password to contain numbers. It can also require
    administrator approval to reset an expired password.

  2. B. The Condition element lets you require MFA to grant the permissions defined in the
    policy. The Resource and Action elements define what those permissions are but not the
    conditions under which those permissions are granted. The Principal element is not used in
    an identity-based policy.

  3. A, D. IAM keeps five versions of every customer managed policy. When CloudTrail is con-
    figured to log global management events, it will record any policy changes in the request
    parameters of the CreatePolicyVersion operation. There is no such thing as a policy
    snapshot. CloudTrail data event logs will not log IAM events.

  4. B. When an IAM user assumes a role, the user gains the permissions assigned to that role
    but loses the permissions assigned to the IAM user. The RunInstances action launches a
    new instance. Because the role can perform the RunInstances action in the us-east-1
    region, the user, upon assuming the role, can create a new instance in the us-east-1
    region but cannot perform any other actions. StartInstances starts an existing instance
    but doesn’t launch a new one.

  5. A. Granting a user access to use a KMS key to decrypt data requires adding the user to the
    key policy as a key user. Adding the user as a key administrator is insufficient to grant this
    access, as is granting the user access to the key using an IAM policy. Adding the user to a
    bucket policy can grant the user permission to access encrypted objects in the bucket but
    doesn’t necessarily give the user the ability to decrypt those objects.



  6. C. VPC flow logs record source IP address information for traffic coming into your VPC.
    DNS query logs record the IP addresses of DNS queries, but those won’t necessarily be the
    same IP addresses accessing your application. Because users won’t directly connect to your
    RDS instance, RDS logs won’t record their IP addresses. CloudTrail logs can record the
    source IP address of API requests but not connections to an EC2 instance.

  7. C, D. Athena lets you perform advanced SQL queries against data stored in S3. A metric
    filter can increment based on the occurrence of a value in a CloudWatch log group but can’t
    tell you the most frequently occurring IP address.

  8. A. The Behavior finding type is triggered by an instance sending abnormally large amounts
    of data or communicating on a protocol and port that it typically doesn’t. The Backdoor
    finding type indicates that an instance has resolved a DNS name associated with a com-
    mand-and-control server or is communicating on TCP port 25. The Stealth finding type is
    triggered by weakening password policies or modifying a CloudTrail configuration. The
    ResourceConsumption finding type is triggered when an IAM user launches an EC2 in-
    stance when they’ve never done so.

  9. A, C. The AWS Config timeline will show every configuration change that occurred on
    the instance, including the attachment and detachment of security groups. CloudTrail
    management event logs will also show the actions that detached and attached the security
    group. Although AWS Config rules use Lambda functions, the Lambda logs for AWS
    managed rules are not available to you. VPC flow logs capture traffic ingressing a VPC,
    but not API events.

  10. D. The Security Best Practices rules package has rules that apply to only Linux instances.
    The other rules contain rules for both Windows and Linux instances.

  11. C, D. You can use an IAM policy or SQS access policy to restrict queue access to certain
    principals or those coming from a specified IP range. You cannot use network access con-
    trol lists or security groups to restrict access to a public endpoint.

  12. A, C. HTTPS traffic traverses TCP port 443, so the security group should allow inbound
    access to this protocol and port. HTTP traffic uses TCP port 80. Because users need to
    reach the ALB but not the instances directly, the security group should be attached to the
    ALB. Removing the Internet gateway would prevent users from reaching the ALB as well as
    the EC2 instances directly.

  13. B. A security group to restrict inbound access to authorized sources is sufficient to guard
    against a UDP-based DDoS attack. Elastic load balancers do not provide UDP listeners,
    only TCP. AWS Shield is enabled by default and protects against those UDP-based attacks
    from sources that are allowed by the security group.

  14. A, C. WAF can block SQL injection attacks against your application, but only if it’s behind
    an application load balancer. It’s not necessary for the EC2 instances to have an elastic IP
    address. Blocking access to TCP port 3306, which is the port that MySQL listens on for
    database connections, may prevent direct access to the database server but won’t prevent a
    SQL injection attack.



  15. B, D. Both WAF and Shield Advanced can protect against HTTP flood attacks, which are
    marked by excessive or malformed requests. Shield Advanced includes WAF at no charge.
    Shield Standard does not offer protection against Layer 7 attacks. GuardDuty looks for
    signs of an attack but does not prevent one.

  16. A, D. You can revoke and rotate both a customer-managed CMK and a customer-provided
    key at will. You can’t revoke or rotate an AWS-managed CMK or an S3-managed key.

  17. C, D. Customer-managed customer master keys (CMKs) can be rotated at will, whereas
    AWS-managed CMKs are rotated only once a year. RDS and DynamoDB let you use a
    customer-managed CMK to encrypt data. RedShift is not designed for highly transactional
    databases and is not appropriate for the application. KMS stores and manages encryption
    keys but doesn’t store application data.

  18. B, D. To encrypt data on an unencrypted EBS volume, you must first take a snapshot. The
    snapshot will inherit the encryption characteristics of the source volume, so an unencrypted
    EBS volume will always yield an unencrypted snapshot. You can then simultaneously
    encrypt the snapshot as you copy it to another region.
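    The two-step flow this answer describes — snapshot the unencrypted volume, then request encryption on the copy — can be sketched as boto3 parameters. The IDs, region, and KMS key alias below are hypothetical, and the calls themselves appear only as comments:

```python
# Hypothetical IDs; a snapshot of an unencrypted volume is itself
# unencrypted, so encryption is requested at copy time instead.
snapshot_params = {"VolumeId": "vol-0123456789abcdef0"}

copy_params = {
    "SourceRegion": "us-east-1",
    "SourceSnapshotId": "snap-0123456789abcdef0",
    "Encrypted": True,                # encrypt during the copy
    "KmsKeyId": "alias/example-key",  # hypothetical customer-managed CMK
}

# With a boto3 EC2 client in the destination region, the flow would be:
#   snap = ec2.create_snapshot(**snapshot_params)
#   ec2.copy_snapshot(**copy_params)
# (calls not executed here)
```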

  19. B. You can enable encryption on an EFS filesystem only when you create it; therefore, the
    only option to encrypt the data using KMS is to create a new EFS filesystem and copy
    the data to it. A third-party encryption program can’t use KMS keys to encrypt data.
    Encrypting the EBS volume will encrypt the data stored on the volume, but not on the EFS
    filesystem.

  20. A, D. You can install an ACM-generated certificate on a CloudFront distribution or appli-
    cation load balancer. You can’t export the private key of an ACM-generated certificate, so
    you can’t install it on an EC2 instance. AWS manages the TLS certificates used by S3.

  21. C. Security Hub checks the configuration of your AWS services against AWS best practices.


Chapter 13: The Cost Optimization Pillar

  1. C. The Free Tier provides free access to basic levels of AWS services for a new account’s
    first year.

  2. A. Standard provides the highest replication and quickest access and is, therefore, the
    most expensive option. Storage rates for Standard-Infrequent and One Zone-Infrequent are
    lower than Standard but are still more expensive than Glacier.

  3. B. Cost Explorer provides usage and spending data. Organizations lets you combine mul-
    tiple AWS accounts under a single administration. TCO Calculator lets you compare the
    costs of running an application on AWS versus locally.

  4. D. Cost Explorer provides usage and spending data, but without the ability to easily incor-
    porate Redshift and QuickSight that Cost and Usage Reports offers. Trusted Advisor
    checks your account for best-practice compliance. Budgets allows you to set alerts for prob-
    lematic usage.



  5. A, B, D. As efficient as Organizations can be, the threat they represent grows just as
    large. There is no such thing as a specially hardened organization-level VPC. Security
    groups don’t require any special configuration.

  6. B, C. Trusted Advisor monitors your EC2 instances for lower than 10 percent CPU and
    network I/O below 5 MB on four or more days. Trusted Advisor doesn’t monitor Route 53
    hosted zones or the status of S3 data transfers. Proper OS-level configuration of your EC2
    instances is your responsibility.

  7. B. The Pricing Calculator is the most direct tool for this kind of calculation. TCO
    Calculator helps you compare costs of on-premises to AWS deployments. Trusted Advisor
    checks your account for best-practice compliance. Cost and Usage Reports helps you
    analyze data from an existing deployment.

  8. A. Monitoring of EBS volumes for capacity is not within the scope of budgets.

  9. A, B. Tags can take up to 24 hours to appear and they can’t be applied to legacy resources.
    You’re actually allowed only two free budgets per account. Cost allocation tags are
    managed from the Cost Allocation Tags page.

  10. D. The most effective approach would be to run three reserve instances 12 months/year and
    purchase three scheduled reserve instances for the summer. Spot instances are not appro-
    priate because they shut down automatically. Since it’s possible to schedule an RI to launch
    within a recurring block of time, provisioning other instance configurations for the summer
    months will be wasteful.

  11. C. Interruption policies are relevant to spot instances, not reserved instances. Payment
    options (All Upfront, Partial Upfront, or No Upfront), reservation types (Standard or Con-
    vertible RI), and tenancy (Default or Dedicated) are all necessary settings for RIs.

  12. C. No Upfront is the most expensive option. The more you pay up front, the lower the
    overall cost. There’s no option called Monthly.
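    The pricing relationship in this answer — the more you pay up front, the lower the total — can be illustrated with a quick calculation. The rates below are invented for comparison, not actual AWS prices:

```python
# Hypothetical 1-year Reserved Instance costs illustrating that more money
# up front means a lower overall cost; these figures are made up.
HOURS_PER_YEAR = 8760

def total_cost(upfront, hourly_rate):
    """Total cost of a 1-year reservation: upfront fee plus hourly charges."""
    return upfront + hourly_rate * HOURS_PER_YEAR

all_upfront = total_cost(upfront=800.0, hourly_rate=0.0)
partial_upfront = total_cost(upfront=420.0, hourly_rate=0.048)
no_upfront = total_cost(upfront=0.0, hourly_rate=0.102)

# With these (hypothetical) rates:
# All Upfront < Partial Upfront < No Upfront
```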

  13. B, D. Containers are denser and more lightweight than EC2 instances. Containers do tend
    to launch more quickly than EC2 instances and do make it easy to replicate server
    environments, but those benefits are not primarily cost savings.

  14. B. Standard reserve instances make the most sense when they need to be available 24/7 for
    at least a full year, with even greater savings over three years. Irregular or partial workdays
    are not good candidates for this pricing model.

  15. D. A spot instance pool is made up of unused EC2 instances. There are three request types:
    Request, Request And Maintain, and Reserve For Duration. A spot instance interruption
    occurs when the spot price rises above your maximum. A spot fleet is a group of spot
    instances launched together.

  16. A. A spot instance interruption occurs when the spot price rises above your maximum.
    Workload completions and data center outages are never referred to as interruptions. Spot
    requests can’t be manually restarted.



  17. B. Target capacity represents the maximum instances you want running. A spot instance
    pool contains unused EC2 instances matching a particular set of launch specifications. Spot
    maximum and spot cap sound good but aren’t terms normally used in this context.

  18. A. The EBS Lifecycle Manager can be configured to remove older EBS snapshots according
    to your needs. Creating a script is possible, but it’s nowhere near as simple and it’s not
    tightly integrated with your AWS infrastructure. There is no “EBS Scheduled Reserve
    Instance,” but there is an “EC2 Scheduled Reserve Instance.” Tying a string? Really? EBS
    snapshots are stored in S3, but you can’t access the buckets that they’re kept in.

  19. D. The command is request-spot-fleet. The --spot-fleet-request-config argument
    points to a JSON configuration file.

  20. C. The availability zone, target capacity, and AMI are all elements of a complete spot
    fleet request.
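    Tying answers 19 and 20 together, a minimal spot fleet request configuration containing the elements listed here — availability zone, target capacity, and AMI — might look like the following. The role ARN, AMI ID, and prices are hypothetical:

```python
import json

# A minimal (hypothetical) spot fleet request configuration showing the
# elements the answer lists: availability zone, target capacity, and AMI.
spot_fleet_config = {
    "IamFleetRole": "arn:aws:iam::123456789012:role/example-fleet-role",
    "TargetCapacity": 3,
    "SpotPrice": "0.04",
    "LaunchSpecifications": [
        {
            "ImageId": "ami-0123456789abcdef0",  # the AMI
            "InstanceType": "t3.micro",
            "Placement": {"AvailabilityZone": "us-east-1a"},
        }
    ],
}

# Saved to a file, the CLI from answer 19 would consume it:
#   aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
config_json = json.dumps(spot_fleet_config, indent=2)
```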


Chapter 14: The Operational
Excellence Pillar

  1. C, D. It’s a best practice to organize stacks by lifecycle (e.g., development, test, production)
    and ownership (e.g., network team, development team). You can store templates for mul-
    tiple stacks in the same bucket, and there’s no need to separate templates for different stacks
    into different buckets. Organizing stacks by resource cost doesn’t offer any advantage since
    the cost is the same regardless of which stack a resource is in.

  2. A, B. Parameters let you input custom values into a template when you create a stack. The
    purpose of parameters is to avoid hard-coding those values into a template. An AMI ID
    and EC2 key pair name are values that likely would not be hard-coded into a template.
    Although you define the stack name when you create a stack, it is not a parameter that you
    define in a template. The logical ID of a resource must be hard-coded in the template.
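    Because CloudFormation also accepts JSON templates, the parameters this answer mentions — an AMI ID and an EC2 key pair name supplied at stack creation rather than hard-coded — can be sketched as a template fragment built from a Python dict. The parameter names and descriptions here are illustrative; AWS::EC2::Image::Id and AWS::EC2::KeyPair::KeyName are real CloudFormation parameter types:

```python
import json

# Illustrative Parameters section for a JSON CloudFormation template.
# Each parameter is supplied at stack-creation time instead of being
# hard-coded in the template body.
template_fragment = {
    "Parameters": {
        "ImageId": {
            "Type": "AWS::EC2::Image::Id",
            "Description": "AMI to launch",
        },
        "KeyName": {
            "Type": "AWS::EC2::KeyPair::KeyName",
            "Description": "EC2 key pair for SSH access",
        },
    }
}

template_json = json.dumps(template_fragment)
```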

  3. C. When using nested stacks, the parent stack defines a resource of the type
    AWS::CloudFormation::Stack, which points to the template used to generate the nested
    stack. Because of this, there’s no need to define a VPC resource directly in the template
    that creates the parent stack. There is also no need to export stack output values because
    the nested stacks do not need to pass any information to stacks outside of the nested stack
    hierarchy. For this same reason, you don’t need to use the Fn::ImportValue intrinsic
    function, since it is used to import values exported by another stack.

  4. A. A change set lets you see the changes CloudFormation will make before updating the
    stack. A direct update doesn’t show you the changes before making them. There’s no need
    to update or override the stack policy before using a change set to view the changes that
    CloudFormation would make.

  5. C. To use Git to access a repository as an IAM user, the developer must use a Git username
    and password generated by IAM. Neither an AWS access key and secret key combination
    nor an IAM username and password will work. Although SSH is an option, the developer
    would need a private key. The public key is what you’d provide to IAM.



  6. D. You can allow repository access for a specific IAM user by using an IAM policy that
    specifies the repository ARN as the resource. Specifying the repository’s clone URL would
    not work, since the resource must be an ARN. Generating Git credentials also would not
    work, because the user still needs permissions via IAM. There is no such thing as a reposi-
    tory policy.

  7. A. CodeCommit offers differencing, allowing you (and the auditors) to see file-level changes
    over time. CodeCommit offers at-rest encryption using AWS-managed KMS keys but not
    customer-managed keys. S3 offers versioning and at-rest encryption, but not differencing.

  8. B. The git clone command clones or downloads a repository. The git push command
    pushes or uploads changes to a repository. The git add command stages files for commit
    to a local repository but doesn’t commit them or upload them to CodeCommit. The
    aws codecommit get-repository command lists the metadata of a repository, such as
    the clone URL and ARN, but doesn’t download the files in it.

  9. D. CodeDeploy can deploy from an S3 bucket or GitHub repository. It can’t deploy from
    any other Git repository or an EBS snapshot.

  10. B. A blue/green instance deployment requires an elastic load balancer (ELB) in order to
    direct traffic to the replacement instances. An in-place instance deployment can use an ELB
    but doesn’t require it. A blue/green Lambda deployment doesn’t use an ELB because ELB is
    for routing traffic to instances. There’s no such thing as an in-place Lambda deployment.

  11. C. The AllAtOnce deployment configuration considers the entire deployment to have suc-
    ceeded if the application is deployed successfully to at least one instance. HalfAtATime and
    OneAtATime require the deployment to succeed on multiple instances. There’s no precon-
    figured deployment configuration called OnlyOne.

  12. B. The AfterAllowTraffic lifecycle event occurs last in any instance deployment that
    uses an elastic load balancer. ValidateService and BeforeAllowTraffic occur before
    CodeDeploy allows traffic to the instances. AllowTraffic is a lifecycle event, but you
    can’t hook into it to run a script.
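    The ordering discussed in this answer can be sketched as data. This is a simplified, abbreviated list — only the events named in the answer are shown, in the order CodeDeploy runs them for a deployment behind a load balancer — with a flag marking whether a hook script can attach:

```python
# Simplified ordering of the CodeDeploy lifecycle events named in this
# answer (earlier events such as Install are omitted). The boolean marks
# whether you can hook a script into the event.
lifecycle_events = [
    ("ValidateService", True),
    ("BeforeAllowTraffic", True),
    ("AllowTraffic", False),       # reserved for CodeDeploy; no hook script
    ("AfterAllowTraffic", True),   # occurs last
]

last_event = lifecycle_events[-1][0]
```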

  13. A. CodePipeline stores pipeline artifacts in an S3 bucket. An artifact can serve as an input
    to a stage, an output from a stage, or both. A provider is a service that performs an action,
    such as building or testing. An asset is a term that often refers to the supporting files for an
    application, such as images or audio. S3 doesn’t offer snapshots, but it does offer versioning
    for objects.

  14. B, C. You can implement an approval action to require manual approval before transition-
    ing to the deploy stage. Instead of or in addition to this, you can disable the transition to
    the deploy stage, which would require manually enabling the transition to deploy to pro-
    duction. Because CodePipeline uses one bucket for all stages of the pipeline, you can’t cre-
    ate a separate bucket for the deploy stage. Even if you could, disallowing developers access
    to that bucket would not prevent a deployment, since CodePipeline obtains its permission
    to the bucket by virtue of its IAM service role.



  15. A, D, E. A pipeline must consist of at least two stages. The first stage must contain only
    source actions. Since the templates are stored in CodeCommit, it must be the provider for
    the source action. The second stage of the pipeline should contain a deploy action with a
    CloudFormation provider, since it’s the service that creates the stack. There’s no need for a
    build stage, because CloudFormation templates are declarative code that don’t need to be
    compiled. Hence, the pipeline should only be two stages. CodeCommit is not a valid pro-
    vider for the deploy action.

  16. B. A pipeline can have anywhere from two to 10 stages. Each stage can have one to
    20 actions.

  17. B. Automation documents let you perform actions against your AWS resources, including
    taking EBS snapshots. Although they’re called automation documents, you can still
    manually execute them. A command document performs actions within a Linux or Win-
    dows instance. A policy document works only with State Manager and can’t take an EBS
    snapshot. There’s no manual document type.

  18. A. The AmazonEC2RoleforSSM managed policy contains permissions allowing the
    Systems Manager agent to interact with the Systems Manager service. There’s no need to
    install the agent because Amazon Linux comes with it preinstalled. There’s also no need to
    open inbound ports to use Systems Manager.

  19. A, D. Setting the patch baseline’s auto-approval delay to 0 and then running the
    AWS-RunPatchBaseline document would immediately install all available security
    patches. Adding the patch to the list of approved patches would approve the specific patch
    for installation but not any other security updates released within the preceding seven days.
    Changing the maintenance window to occur Monday at midnight wouldn’t install the patch
    until the following Monday.

  20. A, B. Creating a global inventory association will immediately run the
    AWS-GatherSoftwareInventory policy document against the instance, collecting both
    network configuration and software inventory information. State Manager will execute
    the document against future instances according to the schedule you define. Simply
    running the AWS-GatherSoftwareInventory policy document won’t automatically
    gather configuration information for future instances. Of course, an instance must be
    running in order for the Systems Manager agent to collect data from it. The
    AWS-SetupManagedInstance document is an automation document and thus can
    perform operations on AWS resources but not tasks within an instance.


Assessment Test

  1. You have an application running on Amazon Elastic Compute Cloud (Amazon EC2) that
    needs read-only access to several AWS services. What is the best way to grant that applica-
    tion permissions only to a specific set of resources within your account?

    1. Use API credentials derived from the AWS account.

    2. Launch the EC2 instance into an AWS Identity and Access Management (IAM) role
      and attach the ReadOnlyAccess IAM-managed policy.

    3. Declare the necessary permissions as statements in the AWS SDK configuration file on
      the EC2 instance.

    4. Launch the EC2 instance into an IAM role with custom IAM policies for the permissions.


  2. You have deployed a new application in the US West (Oregon) Region. However, you have
    accidentally deployed an Amazon Polly lexicon needed for your application in EU (London).
    How can you use your lexicon to synthesize speech while minimizing the changes to your
    application code and reducing cost?

    1. Point your SDK client to the EU (London) for all requests to Amazon Polly, but to US
      West (Oregon) for all other API calls.

    2. No action needed; the data is automatically available from all Regions.

    3. Upload a copy of the lexicon to US West (Oregon).

    4. Move the rest of the application resources to EU (London).


  3. When you’re placing subnets for a specific Amazon Virtual Private Cloud (Amazon VPC),
    you can place the subnets in which of the following?

    1. In any Availability Zone within the Region for the Amazon VPC

    2. In any Availability Zone in any Region

    3. In any AWS edge location

    4. In any specific AWS data center


  4. You have identified two Amazon Elastic Compute Cloud (Amazon EC2) instances in your
    account that appear to have the same private IP address. What could be the cause?

    1. These instances are in different Amazon Virtual Private Cloud (Amazon VPCs).

    2. The instances are in different subnets.

    3. The instances have different network ACLs.

    4. The instances have different security groups.


  5. You have a workload that requires 15,000 consistent IOPS for data that must be durable.
    What combination of the following do you need? (Select TWO.)

    1. Use an Amazon Elastic Block Store (Amazon EBS) optimized instance.

    2. Use an instance store.

    3. Use a Provisioned IOPS SSD volume.

    4. Use a previous-generation EBS volume.



  6. Your company stores critical documents in Amazon Simple Storage Service (Amazon S3),
    but it wants to minimize cost. Most documents are used actively for only about one month
    and then used much less frequently after that. However, all data needs to be available
    within minutes when requested. How can you meet these requirements?

    1. Migrate the data to Amazon S3 Reduced Redundancy Storage (RRS) after 30 days.

    2. Migrate the data to Amazon S3 Glacier after 30 days.

    3. Migrate the data to Amazon S3 Standard – Infrequent Access (IA) after 30 days.

    4. Turn on versioning and then migrate the older version to Amazon S3 Glacier.


  7. You are migrating your company’s applications and data from on-premises to the AWS
    Cloud. You have performed a data inventory and discovered that you will need to transfer
    about 2 PB of data to AWS. Which migration option will be the best choice for your com-
    pany with minimal cost and shortest time?

    1. AWS Snowball

    2. AWS Snowmobile

    3. Upload files directly to AWS over the internet using Amazon Simple Storage Service
      (Amazon S3) Transfer Acceleration.

    4. Amazon Kinesis Data Firehose


  8. You are changing your application to take advantage of the elasticity and cost benefits pro-
    vided by AWS Auto Scaling. To do this, you must move session state information from the
    individual Amazon Elastic Compute Cloud (Amazon EC2) instances. Which of the follow-
    ing AWS Cloud services is best suited as an alternative for storing session state information?

    1. Amazon DynamoDB

    2. Amazon Redshift

    3. AWS Storage Gateway

    4. Amazon Kinesis


  9. Your company’s senior management wants to query several data stores to obtain a “big pic-
    ture” view of the business. The amount of data contained within the data stores is at least
    2 TB in size. Which of the following is the best AWS service to deliver results to senior
    management?

    1. Amazon Elastic Block Store (Amazon EBS)

    2. Amazon Simple Storage Service (Amazon S3)

    3. Amazon Relational Database Service (Amazon RDS)

    4. Amazon Redshift


  10. Your ecommerce application provides daily and ad hoc reporting to various business
    units on customer purchases. These operations result in a high level of read traffic to your
    MySQL Amazon Relational Database Service (Amazon RDS) instance. What can you do to
    scale up read traffic without impacting your database’s performance?

    1. Increase the allocated storage for the Amazon RDS instance.

    2. Modify the Amazon RDS instance to be a Multi-AZ deployment.



    3. Create a read replica for an Amazon RDS instance.

    4. Change the Amazon RDS instance DB engine version.


  11. Your company has refactored their application to use NoSQL instead of SQL. They would
    like to use a managed service for running the new NoSQL database. Which AWS service
    should you recommend?

    1. Amazon Relational Database Service (Amazon RDS)

    2. Amazon Elastic Compute Cloud (Amazon EC2)

    3. Amazon DynamoDB

    4. Amazon Redshift


  12. A company is currently using Amazon Relational Database Service (Amazon RDS);
    however, they are retiring a database that is currently running. They have automatic back-
    ups enabled on the database. They want to make sure that they retain the last backup
    before deleting the Amazon RDS database. As the lead developer on the project, what
    should you do?

    1. Delete the database. Amazon RDS automatic backups are already enabled.

    2. Create a manual snapshot before deleting the database.

    3. Use the AWS Database Migration Service (AWS DMS) to back up the database.

    4. SSH into the Amazon RDS database and perform a SQL dump.


  13. When using Amazon Redshift, which node do you use to run your SQL queries?

    1. Compute node

    2. Cluster node

    3. Master node

    4. Leader node


  14. Your company is building a recommendation feature for their application. They would like
    to use an AWS managed graph database. Which service should you recommend?

    1. Amazon Relational Database Service (Amazon RDS)

    2. Amazon Neptune

    3. Amazon ElastiCache

    4. Amazon Redshift


  15. You have an Amazon DynamoDB table that has a partition key and a sort key. However, a
    business analyst on your team wants to be able to query the DynamoDB table with a differ-
    ent partition key. What should you do?

    1. Create a local secondary index.

    2. Create a global secondary index.

    3. Create a new DynamoDB table.

    4. Advise the business analyst that this is not possible.



  16. An application is using Amazon DynamoDB. Recently, a developer on your team has
    noticed that occasionally the application does not return the most up-to-date data after a
    read from the database. How can you solve this issue?

    1. Increase the number of read capacity units (RCUs) for the table.

    2. Increase the number of write capacity units (WCUs) for the table.

    3. Refactor the application to use a SQL database.

    4. Configure the application to perform a strongly consistent read.


  17. A developer on your team would like to test a new idea and requires a NoSQL database.
    Your current applications are using Amazon DynamoDB. What should you recommend?

    1. Create a new table inside DynamoDB.

    2. Use DynamoDB Local.

    3. Use another NoSQL database on-premises.

    4. Create an Amazon Elastic Compute Cloud (Amazon EC2) instance, and install a
      NoSQL database.

  18. The AWS Encryption SDK provides an encryption library that integrates with AWS Key
    Management Service (AWS KMS) as a master key provider. Which of the following opera-
    tions does the AWS Encryption SDK perform to build on the AWS SDKs?

    1. Generates, encrypts, and decrypts data keys

    2. Uses the data keys to encrypt and decrypt your raw data

    3. Stores the encrypted data keys with the corresponding encrypted data in a single
      object

    4. All of the above


  19. Of all the cryptographic algorithms that the AWS Encryption SDK supports, which one is
    the default algorithm?

    1. AES-256

    2. AES-192

    3. AES-128

    4. SSH-256


  20. Amazon Elastic Block Store (Amazon EBS) volumes are encrypted by default.

    1. True

    2. False


  21. Which of the following cannot be retained when deleting an AWS Elastic Beanstalk
    environment?

    1. Source code from the Git repository

    2. Data from the automatic backups of an Amazon Relational Database Service (Amazon
      RDS) instance

    3. Packaged code from the source bundle stored in an Amazon Simple Storage Service
      (Amazon S3) bucket

    4. Data from the snapshot of an Amazon RDS instance



  22. Which of the following is not part of the AWS Elastic Beanstalk functionality?

    1. Notify the account user of language runtime platform changes

    2. Display events per environment

    3. Show instance statuses per environment

    4. Perform automatic changes to AWS Identity and Access Management (IAM) policies


  23. What happens to AWS CodePipeline revisions that, upon reaching a manual approval gate,
    are rejected?

    1. The pipeline continues.

    2. A notification is sent to the account administrator.

    3. The revision is treated as failed.

    4. The pipeline creates a revision clone and continues.


  24. Which of the following is an invalid strategy for migrating data to AWS CodeCommit?

    1. Incrementally committing files from a large repository

    2. Syncing the files from Amazon Simple Storage Service (Amazon S3) using the sync
      AWS CLI command

    3. Cloning an existing repository, updating the remote, and pushing

    4. Manually creating files in the AWS Management Console


  25. You have an AWS CodeBuild task in your pipeline that requires large binary files that do
    not frequently change. What would be the best way to include these files in your build?

    1. Store the files in your source code repository. They will be passed in as part of the
      revision.

    2. Store the files in an Amazon Simple Storage Service (Amazon S3) bucket and copy
      them during the build.

    3. Create a custom build container that includes the files.

    4. It is not possible to include files above a certain size.


  26. When you update an AWS::S3::Bucket resource, what is the expected behavior if the Name
    property is updated?

    1. The resource is updated with no interruption.

    2. The resource is updated with some interruption.

    3. The resource is replaced.

    4. The resource is deleted.


  27. What is the preferred method for updating resources created by AWS CloudFormation?

    1. Updating the resource directly in the AWS Management Console

    2. Submitting an updated template to AWS CloudFormation to modify the stack

    3. Updating the resource using the AWS Command Line Interface (AWS CLI)

    4. Updating the resource using an AWS Software Development Kit (AWS SDK)



  28. When does the AWS OpsWorks Stacks configure lifecycle event run?

    1. On individual instances immediately when they are first created

    2. On individual instances after a deploy lifecycle event

    3. On all instances in a stack when a single instance comes online or goes offline

    4. On all instances in a stack after a deploy lifecycle event


  29. Which non-Amazon Elastic Compute Cloud (Amazon EC2) AWS resources can AWS
    OpsWorks Stacks manage? (Select THREE.)

    1. Elastic IP addresses

    2. Amazon Elastic Block Store (Amazon EBS) volumes

    3. Amazon Relational Database Service (Amazon RDS) database instances

    4. Amazon ElastiCache clusters

    5. Amazon Redshift data warehouses


  30. Which AWS Cloud service can Simple Active Directory (Simple AD) use to authenticate
    users?

    1. Amazon WorkDocs

    2. Amazon Cognito

    3. Amazon Elastic Compute Cloud (Amazon EC2)

    4. Amazon Simple Storage Service (Amazon S3)


  31. What is the best application of Amazon Cognito?

    1. Use instead of Active Directory for AWS Identity and Access Management (IAM) users.

    2. Provide authentication to third-party web applications.

    3. Use as an Amazon Aurora database.

    4. Use to access objects in an Amazon Simple Storage Service (Amazon S3) bucket.


  32. You manage a sales tracking system in which point-of-sale devices send transactions of this
    form:

    {"date":"2017-01-30", "amount":100.20, "product_id": "1012", "region":
    "WA", "customer_id": "3382"}

    You need to generate two real-time reports. The first reports on the total sales per day for
    each customer. The second reports on the total sales per day for each product. Which AWS
    offerings and services can you use to generate these real-time reports?

    1. Ingest the data through Amazon Kinesis Data Streams. Use Amazon Kinesis Data Analyt-
      ics to query for sales per day for each product and sales per day for each customer using
      SQL queries. Feed the result into two new streams in Amazon Kinesis Data Firehose.

    2. Ingest the data through Kinesis Data Streams. Use Kinesis Data Firehose to query for
      sales per day for each product and sales per day for each customer with SQL queries.
      Feed the result into two new streams in Kinesis Data Firehose.



    3. Ingest the data through Kinesis Data Analytics. Use Kinesis Data Streams to query for
      sales per day for each product and sales per day for each customer with SQL queries. Feed
      the result into two new streams in Kinesis Data Firehose.

    4. Ingest the data in Amazon Simple Queue Service (Amazon SQS). Use Kinesis Data
      Firehose to query for sales per day for each product and sales per day for each
      customer with SQL queries. Feed the result into two new streams in Kinesis Data
      Firehose.

  33. You design an application for selling toys online. Every time a customer orders a toy, you
    want to add an item into the orders table in Amazon DynamoDB and send an email to the
    customer acknowledging their order. The solution should be performant and cost-effective.
    How can you trigger this email?
    How can you trigger this email?

    1. Use an Amazon Simple Queue Service (Amazon SQS) queue.

    2. Schedule an AWS Lambda function to check for changes to the orders table every
      minute.

    3. Schedule a Lambda function to check for changes to the orders table every second.

    4. Use Amazon DynamoDB Streams.


  34. A company would like to use Amazon DynamoDB. They want to set up a NoSQL-style
    trigger. Is this something that can be accomplished? If so, how?

    1. No. This cannot be done with DynamoDB and NoSQL.

    2. Yes, but not with AWS Lambda.

    3. No. DynamoDB is not a supported event source for Lambda.

    4. Yes. You can use Amazon DynamoDB Streams and poll them with Lambda.


  35. A company wants to access the infrastructure on which AWS Lambda runs. Is this possible?

    1. No. Lambda is a managed service and runs the necessary infrastructure on your
      behalf.

    2. Yes. They can access the infrastructure and make changes to the underlying OS.

    3. Yes. They need to open a support ticket.

    4. Yes, but they need to contact their Solutions Architect to provide access to the
      environment.

  36. Using the smallest amount of memory possible for an AWS Lambda function, currently
    128 MB, will result in the lowest bill.

    1. True. Lambda bills based on the total memory allocated.

    2. False. Lambda has a flat rate—memory allocation is not important for billing, only
      performance.

    3. False. Lambda bills based on memory plus the number of times that you trigger the
      function.

    4. False. Lambda bills based on memory, the amount of compute time spent on a function
      in 100-ms increments, and the number of times that you execute or trigger a function.

      xlii Assessment Test


  37. Which Amazon services can you use for caching? (Select TWO.)

    1. AWS CloudFormation

    2. Amazon Simple Storage Service (Amazon S3)

    3. Amazon CloudFront

    4. Amazon ElastiCache


  38. Which Amazon API Gateway feature enables you to create a separate path that can be helpful in creating a development endpoint and a production endpoint?

    1. Authorizers

    2. API keys

    3. Stages

    4. Cross-origin resource sharing (CORS)


  39. Which of the following methods does Amazon API Gateway support?

    1. GET

    2. POST

    3. OPTIONS

    4. All of the above


  40. Which authorization mechanisms does Amazon API Gateway support?

    1. AWS Identity and Access Management (IAM) policies

    2. AWS Lambda custom authorizers

    3. Amazon Cognito user pools

    4. All of the above


  41. Which tool can you use to develop and test AWS Lambda functions locally?

    1. AWS Serverless Application Model (AWS SAM)

    2. AWS SAM CLI

    3. AWS CloudFormation

    4. None of the above


  42. Which serverless AWS service can you use to store user session state?

    1. Amazon Elastic Compute Cloud (Amazon EC2)

    2. Amazon ElastiCache

    3. AWS Elastic Beanstalk

    4. Amazon DynamoDB


  43. Which AWS service can you use to store user profile information?

    1. Amazon CloudFront

    2. Amazon Cognito

    3. Amazon Kinesis

    4. AWS Lambda



  44. Which of the following objects are good candidates to store in a cache? (Select THREE.)

    1. Session state

    2. Shopping cart

    3. Product catalog

    4. Bank account balance


  45. Which of the following cache engines does Amazon ElastiCache support? (Select TWO.)

    1. Redis

    2. MySQL

    3. Couchbase

    4. Memcached


  46. How can you aggregate Amazon CloudWatch metrics across Regions?

    1. CloudWatch does not aggregate data across Regions.

    2. This is enabled by default.

    3. Send the metric data from other Regions to Amazon Simple Storage Service (Amazon S3)
      for retrieval by CloudWatch.

    4. Stream the metric data to Amazon Kinesis, and retrieve it using an AWS Lambda
      function.

  47. Why would an Amazon CloudWatch alarm report as INSUFFICIENT_DATA instead of OK or ALARM? (Select THREE.)

    1. The alarm was just created.

    2. The metric is not available.

    3. There is an AWS Identity and Access Management (IAM) permission preventing the
      metric from receiving data.

    4. Not enough data is available for the metric to determine the alarm state.

    5. The alarm period is missing.


  48. You were asked to develop an administrative web application that consumes low throughput and rarely receives high traffic. Which of the following instance type families will be the most optimized choice?

    1. Memory optimized

    2. Compute optimized

    3. General purpose

    4. Accelerated computing


  49. Which of the following AWS Cost Management Tools can you use to view your costs and
    find ways to take advantage of elasticity?

    1. AWS Cost Explorer

    2. AWS Trusted Advisor

    3. Amazon CloudWatch

    4. Amazon EC2 Auto Scaling



  50. Because cloud resources are easier to deploy and they incur usage-based costs, your organization is setting up good governance rules to manage costs. They are currently focusing on controlling and restricting Amazon Elastic Compute Cloud (Amazon EC2) instance deployments. Which of the following is an effective recommendation?

    1. Seek approval from Cost Engineering teams before deploying any EC2 instances.

    2. Use AWS Identity and Access Management (IAM) policies to enable engineers to
      deploy EC2 instances only when specific mandatory tags are used.

    3. Review Amazon CloudWatch metrics to optimize the resource utilization.

    4. Use AWS Cost Explorer usage and forecasting reports.


  51. Because your applications are showing consistent steady-state compute usage, you have decided to purchase Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances to gain significant pricing discounts. Which of the following is not the best purchase option?

    1. All Upfront

    2. Partial Upfront

    3. No Upfront

    4. Pay-as-you-go


  52. Your application processes transaction-heavy and IOPS-intensive database workloads. You need to choose the right Amazon Elastic Block Store (Amazon EBS) volume so that application performance is not affected. Which of the following options would you suggest?

    1. HDD-backed storage (st1)

    2. SSD-backed storage (io1)

    3. Amazon Simple Storage Service (Amazon S3) Intelligent Tier class storage

    4. Cold HDD-backed storage (sc1)


  53. A legacy financial institution is planning a major technical upgrade and plans to go global. The architecture depends heavily on caching solutions. Which one of the following services does not fit into the caching solutions?

    1. Amazon ElastiCache for Redis

    2. Amazon ElastiCache for Memcached

    3. Amazon DynamoDB Accelerator

    4. Amazon Elastic Compute Cloud (Amazon EC2) memory-optimized


  54. Which of the following characteristics separates Amazon DynamoDB from the Amazon
    Relational Database Service (Amazon RDS) design?

    1. Incurs the performance costs of an ACID-compliant transaction system

    2. Normalizes data and stores it on multiple tables

    3. Keeps related data together

    4. May require expensive joins



  55. Which of the following partition key choices is an inefficient design that leads to poor distribution of the data in an Amazon DynamoDB table?

    1. User ID, where the application has many users

    2. Device ID, where each device accesses data at relatively similar intervals

    3. Status code, where there are only a few possible status codes

    4. Session ID, where the user session remains distinct


  56. You are planning to build serverless backends by using AWS Lambda to handle web,
    mobile, Internet of Things (IoT), and third-party API requests. Which of the following are
    the main benefits in opting for a serverless architecture in this scenario? (Select THREE.)

    1. No need to manage servers

    2. No need to ensure application fault tolerance and fleet management

    3. No charge for idle capacity

    4. Flexible maintenance schedules

    5. Powered for high complex processing


  57. Your enterprise infrastructure has recently migrated to the AWS Cloud. You are now trying to optimize the storage solutions. Which of the following are the appropriate storage management tools that you can use to review and analyze storage classes and access patterns to help reduce costs? (Select TWO.)

    1. Amazon Simple Storage Service (Amazon S3) analytics

    2. Cost allocation Amazon S3 bucket tags

    3. Amazon S3 Transfer Acceleration

    4. Amazon Route 53

    5. AWS Budgets


Answers to Assessment Test

  1. D. Use the custom IAM policy to configure the permissions to a specific set of resources in
    your account. The
    ReadOnlyAccess IAM policy restricts write access but grants access to
    all resources within your account. AWS account credentials are unrestricted. Policies do not
    go in an SDK configuration file. They are enforced by AWS on the backend.


  2. C. This is the simplest approach because only a single resource is in the wrong Region.
    Option A is a possible approach, but it is not the simplest approach because it introduces
    cross-region calls that may increase latency and cross-region data transfer pricing.

  3. A. Each Amazon VPC is placed in a specific Region and can span all the Availability Zones
    within that Region. Option B is incorrect because a subnet must be placed within the
    Region for the selected VPC. Option C is incorrect because edge locations are not available
    for subnets, and option D is incorrect because you cannot choose specific data centers.

  4. A. Even though each instance in an Amazon VPC has a unique private IP address, you could assign the same private IP address ranges to multiple Amazon VPCs. Therefore, two instances in two different Amazon VPCs in your account could end up with the same private IP address. Options B, C, and D are incorrect because within the same Amazon VPC, there is no duplication of private IP addresses.

  5. A, C. Amazon EBS-optimized instances reserve network bandwidth on the instance for I/O, and Provisioned IOPS SSD volumes provide the highest consistent IOPS. Option B is incorrect because instance store is not durable. Option D is incorrect because a previous-generation EBS volume offers an average of 100 IOPS.

  6. C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is correct. The lifecycle policy will automatically change the storage class for objects aged over 30 days. The Standard-IA storage class is for data that is accessed less frequently, but still requires rapid access when needed. It offers the same high durability, high throughput, and low latency as Standard, with a lower per-gigabyte storage price and a per-gigabyte retrieval fee. Option A is incorrect because RRS provides a lower level of redundancy. The question did not state that the customer is willing to reduce the redundancy level of the data, and RRS does not replicate objects as many times as standard Amazon S3 storage. This storage option enables customers to store noncritical, reproducible data. Option B is incorrect because the fastest retrieval option for Amazon S3 Glacier is typically 3–5 hours. The customer requires retrieval in minutes. Option D is incorrect. Versioning will increase the number of files if new versions of files are being uploaded, which will increase cost. The question did not mention a need for multiple versions of files.
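The lifecycle rule described in option C can be expressed as configuration. Below is a minimal sketch in the shape accepted by boto3's put_bucket_lifecycle_configuration; the rule ID, prefix, and bucket name are hypothetical examples:

```python
# Sketch of an S3 lifecycle rule that transitions objects to Standard-IA
# after 30 days, in the shape accepted by boto3's
# put_bucket_lifecycle_configuration. Rule ID and prefix are hypothetical.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-to-standard-ia",
            "Filter": {"Prefix": "logs/"},  # apply only to this prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
            ],
        }
    ]
}

# With credentials configured, you would apply it like:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```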


  7. A. Option B is incorrect. You could use Snowmobile, but that would not be as cost effective
    because it is meant to be used for datasets of 10 PB or more. Option C is incorrect because
    uploading files directly over the internet to Amazon S3, even using Amazon S3 Transfer Acceleration, would take many months and would use your on-premises bandwidth.
    Option D is incorrect because Amazon Kinesis Data Firehose would still be transferring
    over the internet and take months to complete while using your on-premises bandwidth.



  8. A. DynamoDB is a NoSQL database store that is a good alternative because of its scalability, high availability, and durability characteristics. Many platforms provide open source, drop-in replacement libraries that enable you to store native sessions in DynamoDB. DynamoDB is a suitable candidate for a session storage solution in a share-nothing, distributed architecture.

  9. D. Amazon Redshift is the best choice for data warehouse workloads that typically span
    multiple data repositories and are at least 2 TB in size.

  10. C. Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances. This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads. You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, increasing aggregate read throughput.

  11. C. DynamoDB is the best option. The question states a managed service, so this eliminates the Amazon EC2 service. Additionally, Amazon RDS and Amazon Redshift are SQL database products. The company is looking for a NoSQL product. DynamoDB is a managed NoSQL service.

  12. B. Automatic backups do not retain the backup after the database is deleted. Therefore, option A is incorrect. Option C is incorrect. The AWS Database Migration Service is used to migrate databases from one source to another, which isn’t what you are trying to accomplish here. Option D is incorrect because you cannot SSH into the Amazon RDS database, which is an AWS managed service.


  13. D. The leader node acts as the SQL endpoint and receives queries from client applications,
    parses the queries, and develops query execution plans. Option A is incorrect because the
    compute nodes execute the query execution plan. However, the leader node is where you
    will submit the actual query. Options B and C are incorrect because there is no such thing
    as a cluster or master node in Amazon Redshift.

  14. B. Amazon Neptune is a managed graph database service, which can be used to build recommendation applications. Option A is incorrect, because Amazon RDS is a managed database service and you are looking for a graph database. Option C is incorrect. Amazon ElastiCache is a caching managed database service. Option D is incorrect. Amazon Redshift is a data warehouse service.

  15. B. A global secondary index enables you to use a different partition key or primary key in
    addition to a different sort key. Option A is incorrect because a local secondary index can
    only have a different sort key. Option C is incorrect. A new DynamoDB table would not solve
    the issue. Option D is incorrect because it is possible to accomplish this.
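A global secondary index with its own partition key (and sort key) is declared alongside the table definition. A minimal sketch of boto3 create_table parameters follows; the table, attribute, and index names are hypothetical:

```python
# Sketch of boto3 create_table parameters adding a global secondary index
# whose partition key differs from the table's, as the answer describes.
# Table, attribute, and index names are hypothetical.
create_table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    # Base table is keyed on OrderId only.
    "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "ByCustomer",
            # The GSI uses a different partition key and adds a sort key.
            "KeySchema": [
                {"AttributeName": "CustomerId", "KeyType": "HASH"},
                {"AttributeName": "OrderDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
}
```

A local secondary index, by contrast, could only change the sort key (OrderDate) while keeping OrderId as the partition key.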

  16. D. The application is configured to perform an eventually consistent read, which may not
    return the most up-to-date data. Option A is incorrect—increasing RCUs does not solve
    the underlying issue. Option B is incorrect because this is a read issue, not a write issue.
    Option C is incorrect. There is no need to refactor the entire application, because the issue
    is solvable.



  17. B. DynamoDB Local is the downloadable version of DynamoDB that enables you to write and test applications without accessing the web service. Option A is incorrect. Although you can create a new table, there is a cost associated with this option, so it is not the best option. Option C is incorrect. Even though you can use another NoSQL database, your team is already using DynamoDB. This strategy would require them to learn a new database platform. Additionally, you would have to migrate the database to DynamoDB after development is done. Option D is incorrect for the same reasons as option C.

  18. D. The AWS Encryption SDK is a client-side library designed to streamline data security operations so that customers can follow encryption best practices. It supports the management of data keys, encryption and decryption activities, and the storage of encrypted data. Thus, option D is correct.

  19. A. Options B, C, and D refer to more outdated encryption algorithms. By default, the AWS
    Encryption SDK uses the industry-recommended AES-256 algorithm.

  20. B. Encryption of Amazon EBS volumes is optional.


  21. B. Elastic Beanstalk automatically deletes your Amazon RDS instance when your environment is deleted and does not automatically retain the data. You must create a snapshot of the Amazon RDS instance to retain the data.

  22. D. Elastic Beanstalk cannot make automated changes to the policies attached to the service
    roles and instance roles.

  23. C. Option C is correct because if a revision does not pass a manual approval transition (either by expiring or by being rejected), it is treated as a failed revision. Successive revisions can then progress past this approval gate (if they are approved). Pipeline actions for a specific revision will not continue past a rejected approval gate, so option A is incorrect. A notification can be sent to an Amazon Simple Notification Service (Amazon SNS) topic that you specify when a revision reaches a manual approval gate, but no additional notification is sent if a change is rejected; therefore, option B is incorrect. Option D is incorrect, as AWS CodePipeline does not have a concept of “cloning” revisions.


  24. B. Though option D would be time-consuming, it is still possible to create files in the AWS CodeCommit console. Option A is a recommended strategy for migrating a repository containing a large number of files. Option C is also a valid strategy for smaller repositories. However, there is no way to sync files directly from an Amazon S3 bucket to an AWS CodeCommit repository. Thus, option B is correct.

  25. C. Option A is not recommended, because storing binary files in a Git-based repository
    incurs significant storage costs. Option B can work. However, you would have to pay
    additional data transfer costs any time a build is started. Option C is the most appropriate
    choice, because you can update the build container any time you need to change the files.
    Option D is incorrect, as AWS CodeBuild does not limit the size of files that can be used.

  26. C. Amazon Simple Storage Service (Amazon S3) bucket names are globally unique and
    cannot be changed after a bucket is created. Thus, options A and B are incorrect. Option
    D is incorrect because the resource is not being deleted, only updated. Option C is correct
    because you must create a replacement bucket when changing this property in AWS
    CloudFormation.



  27. B. Option B is correct because you can manage resources declared in a stack entirely within AWS CloudFormation by performing stack updates. Manually updating the resource outside of AWS CloudFormation (using the AWS Management Console, AWS CLI, or AWS SDK) will result in inconsistencies between the state expected by AWS CloudFormation and the actual resource state. This can cause future stack operations to fail. Thus, options A, C, and D are incorrect.

  28. C. Option A is incorrect because this is not the only time configure events run on instances
    in a stack. Options B and D are incorrect because the configure event does not run after a
    deploy event. AWS OpsWorks Stacks issues a configure lifecycle event on all instances in a
    stack any time a single instance goes offline or comes online. This is so that all instances in
    a stack can be made “aware” of the instance’s status. Thus, option C is correct.

  29. A, B, C. AWS OpsWorks Stacks includes the ability to manage AWS resources such as
    Elastic IP addresses, EBS volumes, and Amazon RDS instances. Thus, options A, B, and C
    are correct. Options D and E are incorrect because OpsWorks Stacks does not include any
    automatic integrations with Amazon ElastiCache or Amazon Redshift.

  30. A. Option A is correct because Simple Active Directory (Simple AD) can be used to authenticate users of Amazon WorkDocs. Options B, C, and D are incorrect because Amazon Cognito is an identity provider (IdP), and you cannot use Simple AD to authenticate users of Amazon EC2 or Amazon S3.

  31. B. Amazon Cognito acts as an identity provider (IdP) to mobile applications, eliminating the need to embed credentials into the web application itself. Option A is incorrect because if a customer is currently using Active Directory as their IdP, it is not good practice to create another IdP to operate and manage. Option C is incorrect because an Amazon Aurora database that is used to track data does not assign policies. Option D is incorrect because you use Amazon Cognito to control an application’s access to an S3 bucket or an Amazon S3 object; you don’t use it to directly control access to that bucket or object.

  32. A. Option A is correct because you want to ingest into Amazon Kinesis Data Streams, pass that into Amazon Kinesis Data Analytics, and finally feed that data into Amazon Kinesis Data Firehose. Option B is incorrect because Kinesis Data Firehose cannot run SQL queries. Option C is incorrect because Kinesis Data Streams cannot run SQL queries. Option D is incorrect because Kinesis Data Firehose cannot run SQL queries against data in Amazon SQS.

  33. D. Option D is correct because Amazon DynamoDB Streams allows Amazon DynamoDB
    to publish a message every time there is a change in a table. This solution is performant
    and cost-effective. Option A is incorrect because if you add an item to the
    orders table in
    DynamoDB, it does not automatically produce messages in Amazon Simple Queue Service
    (Amazon SQS). Options B and C are incorrect because if you check the
    orders table every
    minute or every second, it will degrade performance and increase costs.
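The DynamoDB Streams approach can be sketched as a Lambda handler that reacts only to INSERT records (new orders). The event shape follows the documented DynamoDB Streams record format, but the NewImage field names and addresses here are made up, and a real function would call Amazon SES to send the email rather than return addresses:

```python
# Minimal sketch of a Lambda handler wired to a DynamoDB Stream on the
# orders table. It ignores updates/deletes and collects the customer
# email from each newly inserted order. In production you would invoke
# Amazon SES here; this sketch just returns the addresses.
def handler(event, context=None):
    to_notify = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # only new orders trigger an acknowledgment email
        new_image = record["dynamodb"]["NewImage"]
        # DynamoDB stream images use typed attribute values, e.g. {"S": ...}
        to_notify.append(new_image["customerEmail"]["S"])
    return to_notify

sample_event = {
    "Records": [
        {
            "eventName": "INSERT",
            "dynamodb": {"NewImage": {"customerEmail": {"S": "jane@example.com"}}},
        }
    ]
}
```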


  34. D. AWS Lambda supports Amazon DynamoDB event streams as an event source, which can be polled. You can configure Lambda to poll this stream, look for changes, and create a trigger. Option A is incorrect because this can be accomplished with DynamoDB event streams. Option B is incorrect because this can be accomplished with Lambda. Option C is incorrect because DynamoDB is a supported event source for Lambda.



  35. A. AWS Lambda uses containers to operate and is a managed service—you cannot access
    the underlying infrastructure. This is a benefit because your organization does not need to
    worry about security patching and other system maintenance. Option B is incorrect—you
    cannot access the infrastructure. Recall that Lambda is serverless. Option C is incorrect.
    AWS Support cannot provide access to the direct environment. Option D is incorrect—the
    Solutions Architect cannot provide direct access to the environment.

  36. D. AWS Lambda uses three factors when determining cost: the amount of memory allocated, the amount of compute time spent on a function (in 100-ms increments), and the number of times you execute or trigger a function. Options A, B, and C are all incorrect because Lambda billing depends on all three of these factors.
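The three billing factors translate directly into a back-of-the-envelope cost model. The rates below are illustrative placeholders, not current AWS prices:

```python
import math

# Back-of-the-envelope Lambda cost model reflecting the three billing
# factors in the answer: memory allocated, duration rounded up to 100-ms
# increments, and invocation count. The prices are illustrative
# placeholders, not actual AWS rates.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed rate
PRICE_PER_REQUEST = 0.0000002       # assumed rate

def lambda_cost(memory_mb, duration_ms, invocations):
    billed_ms = math.ceil(duration_ms / 100) * 100  # round up to 100 ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST
```

Note that smaller memory lowers the per-duration rate, but if the function then runs longer, total cost can rise, which is why option A alone is false.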

  37. C, D. Option A is incorrect because AWS CloudFormation is a service that helps you model and set up your AWS resources. Option B is incorrect because you use Amazon S3 as a storage tool for the internet. Options C and D are correct because they are both caching tools.

  38. C. Option A is incorrect, as authorizers enable you to control access to your APIs by using Amazon Cognito or an AWS Lambda function. Option B is incorrect because API keys are used to provide customers with access to your API, which is useful for selling your API. Option C is the correct answer. You can use stages to create a separate path with multiple endpoints, such as development and production. Option D is incorrect, as CORS is used to allow one service to call another service.
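Stages appear as distinct paths on the same API's invoke URL, which follows the documented https://{api-id}.execute-api.{region}.amazonaws.com/{stage} format. The API ID and Region below are made up:

```python
# Illustrative helper showing how API Gateway stages become separate
# paths on the same API. The URL format is the documented invoke-URL
# pattern; api_id and region values here are hypothetical.
def invoke_url(api_id, region, stage):
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}"

dev_url = invoke_url("abc123", "us-east-1", "dev")
prod_url = invoke_url("abc123", "us-east-1", "prod")
```

The same API definition is deployed to both stages; only the trailing stage path (and any stage variables) differs between the development and production endpoints.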


  39. D. API Gateway supports all of the methods listed. GET, POST, PUT, PATCH, DELETE, HEAD, and OPTIONS are all supported methods.

  40. D. With Amazon API Gateway, you can enable authorization for a particular method with
    IAM policies, AWS Lambda custom authorizers, and Amazon Cognito user pools. Options
    A, B, and C are all correct, but option D is the best option because it combines all of them.

  41. B. Option A is incorrect. Though AWS SAM is needed for the YAML/JSON template
    defining the function, it does not allow for testing the AWS Lambda function locally.
    Option B is the correct answer. AWS SAM CLI allows you to test the Lambda function
    locally. Option C is incorrect. AWS CloudFormation is used to deploy resources to the AWS
    Cloud. Option D is incorrect because AWS SAM CLI is the tool to test Lambda functions
    locally.


  42. D. Option A is incorrect. Amazon EC2 is a virtual machine service. Option B is incorrect because Amazon ElastiCache deploys clusters of machines, which you are then responsible for scaling. Option C is incorrect because Elastic Beanstalk deploys full-stack applications by using Amazon EC2. Option D is correct because DynamoDB can store session state in a NoSQL database, and it is serverless.

  43. B. With Amazon Cognito, you can create user pools to store user profile information and store attributes such as user name, phone number, address, and so on. Option A is incorrect. Amazon CloudFront is a content delivery network (CDN). Option C is incorrect. Amazon Kinesis is a service that you can implement to collect, process, and analyze streaming data in real time. Option D is incorrect. By using AWS Lambda, you can create custom programming functions for compute processing.

    Answers to Assessment Test li


  44. A, B, C. Option D is incorrect because when compared to the other options, a bank balance is not likely to be stored in a cache; it is probably not data that is retrieved as frequently as the others. Options A, B, and C are all better data candidates to cache because multiple users are more likely to access them repeatedly. However, you could also cache the bank account balance for shorter periods if the database query is not performing well.
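The trade-off between good and poor cache candidates often comes down to the time-to-live (TTL) you can tolerate. A toy in-memory sketch (not ElastiCache itself) illustrating a long TTL for catalog data and a very short TTL for a balance:

```python
import time

# Toy TTL cache illustrating the answer: frequently read, slowly changing
# items (product catalog, session state, shopping cart) cache well with
# longer TTLs, while a bank balance would get a very short TTL if cached
# at all. Keys and values here are hypothetical.
class TTLCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]  # expired: force a fresh database read
            return None
        return value

cache = TTLCache()
cache.set("catalog:toys", ["teddy bear", "kite"], ttl_seconds=300)  # long TTL
cache.set("balance:acct-1", 1042.17, ttl_seconds=2)                 # short TTL
```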

  45. A, D. Options A and D are correct because Amazon ElastiCache supports both the Redis
    and Memcached open source caching engines. Option B is incorrect because MySQL is not a
    caching engine—it is a relational database engine. Option C is incorrect because Couchbase
    is a NoSQL database and not one of the caching engines that ElastiCache supports.

  46. A. Amazon CloudWatch does not aggregate data across Regions; therefore, option A is
    correct.

  47. A, B, D. An Amazon CloudWatch alarm changes to a state other than INSUFFICIENT_DATA only when the alarm resource has had sufficient time to initialize and there is sufficient data available for the specified metric and period. Option C is incorrect because permissions for sending metrics to CloudWatch are the responsibility of the resource sending the data. Option E is incorrect because the alarm does not create successfully unless it has a valid period.

  48. C. General-purpose instances provide a balance of compute, memory, and networking resources. T2 instances are a low-cost option that provides a small amount of CPU resources that can be increased in short bursts when additional cycles are available. They are well suited for lower-throughput applications, such as administrative applications or low-traffic websites. For more details on the instance types, see https://aws.amazon.com/ec2/instance-types/.

  49. A. AWS Cost Explorer reflects the cost and usage of Amazon Elastic Compute Cloud
    (Amazon EC2) instances over the most recent 13 months and forecasts potential spending
    for the next 3 months. By using Cost Explorer, you can examine patterns on how much you
    spend on AWS resources over time, identify areas that need further inquiry, and view trends
    that help you understand your costs. In addition, you can specify time ranges for the data
    and view time data by day or by month. Option D is incorrect because Amazon EC2 Auto
    Scaling helps you to maintain application availability and enables you to add or remove EC2
    instances automatically according to conditions that you define. It does not give you insights
    into costs incurred.


  50. B. You can use tags to control permissions. Using IAM policies, you can enforce the tag to
    gain precise control over access to resources, ownership, and accurate cost allocation. Option
    A is incorrect because eventually deployments become unmanageable, given the scale and rate
    at which resources get deployed in a successful organization. Options C and D are incorrect
    because Amazon CloudWatch and AWS Cost Explorer are unrelated to access controls and
    measures, and these tools monitor resources after they are created.
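One common way to enforce mandatory tags is an IAM policy that denies ec2:RunInstances whenever the required tag is missing from the request, using the documented aws:RequestTag condition key with the Null operator. Sketched here as a Python dict; the CostCenter tag key is an example choice:

```python
# Sketch of an IAM policy (as a Python dict) that denies ec2:RunInstances
# unless a CostCenter tag is supplied in the request. The Null condition
# with "true" matches when the aws:RequestTag/CostCenter key is absent.
# The tag key name is a hypothetical example.
require_tag_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRunInstancesWithoutCostCenterTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        }
    ],
}
```

Because an explicit deny always wins, engineers retain their normal launch permissions but cannot create untagged instances.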

  51. D. You can choose among the three payment options when you purchase a Standard or Convertible Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and then are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option requires no upfront payment and provides a discounted hourly rate for the duration of the term. Pay-as-you-go is not a Reserved Instance purchase option.



  52. B. The performance of transaction-heavy workloads depends primarily on IOPS; SSD-backed volumes are designed for transactional, IOPS-intensive database workloads, boot volumes, and workloads that require high IOPS. For more information, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html.

  53. D. Options A, B, and C help in building a high-speed data storage layer that stores a subset of data. This data is typically transient in nature so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Option D only supplements the setup of your own caching mechanism, and that is not the preferred solution for this scenario. For more information, see https://aws.amazon.com/caching/aws-caching/.

  54. C. Keeping data together is a basic characteristic of a NoSQL database such as Amazon DynamoDB. Keeping related data in proximity has a major impact on cost and performance. Instead of distributing related data items across multiple tables, keep related items in your NoSQL system as close together as possible. Options A, B, and D are typical characteristics of a relational database.

  55. C. The status code option suggests an inefficient partition key, because a few possible status codes lead to uneven distribution of data and cause request throttling. Options A, B, and D suggest efficient partition keys because of their distinct nature, which leads to an even distribution of the data. For more information, see https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
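The uneven-distribution point is easy to simulate: a key with only a few distinct values can occupy at most that many partitions, leaving the rest idle. Here Python's built-in hash() stands in for DynamoDB's internal partition hashing:

```python
from collections import Counter

# Illustration of why a low-cardinality partition key (a handful of
# status codes) distributes poorly compared to a high-cardinality one
# (user IDs). hash() is only a stand-in for DynamoDB's internal
# partition hashing; key names are hypothetical.
def partition_counts(keys, num_partitions=10):
    return Counter(hash(k) % num_partitions for k in keys)

status_keys = [f"status-{i % 3}" for i in range(3000)]  # 3 distinct values
user_keys = [f"user-{i}" for i in range(3000)]          # 3000 distinct values

status_spread = partition_counts(status_keys)  # at most 3 partitions used
user_spread = partition_counts(user_keys)      # spread across partitions
```

All traffic for a given status code lands on one partition, which is exactly the hot-partition pattern that triggers throttling.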

  56. A, B, C. Using a serverless approach means not having to manage servers and not incurring compute costs when there is no user traffic. This is achieved while still offering instant scale to meet high demand, such as a flash sale on an ecommerce site or a social media mention that drives a sudden wave of traffic. Option D is incorrect because AWS Lambda runs your code on a high-availability compute infrastructure and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. Option E is incorrect because you can configure Lambda functions to run up to 15 minutes per execution. As a best practice, set the timeout value based on your expected execution time to prevent your function from running longer than intended.

  57. A, B. Option A is correct. Amazon S3 analytics lets you analyze storage access patterns to help you decide when to transition the right data to the right storage class. This feature observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA storage class. Option B is correct. A cost allocation tag is a key-value pair that you associate with an Amazon S3 bucket. To manage storage data most effectively, you can use these tags to categorize your Amazon S3 objects and filter on these tags in your data lifecycle policies. Options C and D are incorrect. These options focus on establishing a solution with an efficient data transfer. Option E is incorrect. With AWS Budgets, you can set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.



    Review Questions

    1. Which of the following is typically used to sign API calls to AWS services?

      1. Customer master key (CMK)

      2. AWS access key

      3. IAM user name and password

      4. Account number


    2. When you make API calls to AWS services, for most services those requests are directed at a
      specific endpoint that corresponds to which of the following?

      1. AWS facility

      2. AWS Availability Zone

      3. AWS Region

      4. AWS edge location


    3. When you’re configuring a local development machine to make AWS API calls, which of the
      following is the simplest secure method of obtaining an API credential?

      1. Create an IAM user, assign permissions by adding the user to an IAM group with IAM
        policies attached, and generate an access key for programmatic access.

      2. Sign in with your email and password, and visit My Security Credentials to generate an
        access key.

      3. Generate long-term credentials for a built-in IAM role.

      4. Use your existing user name and password by configuring local environment variables.


    4. You have a large number of employees, and each employee already has an identity in an
      external directory. How might you manage AWS API credentials for each employee so that
      they can interact with AWS for short-term sessions?

      1. Create an IAM user and credentials for each member of your organization.

      2. Share a single password through a file stored in an encrypted Amazon S3 bucket.

      3. Define a set of IAM roles, and establish a trust relationship between your directory
        and AWS.

      4. Configure the AWS Key Management Service (AWS KMS) to store credentials for each user.


    5. You have a team member who needs access to write records to an existing Amazon
      DynamoDB table within your account. How might you grant write permission to this
      specific table and only this table?

      1. Write a custom IAM policy that specifies the table as the resource, and attach that
        policy to the IAM user for the team member.

      2. Attach the DynamoDBFullAccess managed policy to the IAM role used by the team
        member.

      3. Delete the table and recreate it. Permissions are set when the DynamoDB table
        is created.

      4. Create a new user within DynamoDB, and assign table write permissions.

        34 Chapter 1 Introduction to AWS Cloud API


    6. You created a Movies DynamoDB table in the AWS Management Console, but when you
      try to list your DynamoDB tables by using the Java SDK, you do not see this table. Why?

      1. DynamoDB tables created in the AWS Management Console are not accessible from
        the API.

      2. Your SDK may be listing your resources from a different AWS Region in which the
        table does not exist.

      3. The security group applied to the Movies table is keeping it hidden.

      4. Listing tables is supported only in C# and not in the Java SDK.


    7. You make an API request to describe voices offered by Amazon Polly by using the AWS CLI,
      and you receive the following error message:

      Could not connect to the endpoint URL:

      https://polly.us-east-1a.amazonaws.com/v1/voices

      What went wrong?

      1. Your API credentials have been rejected.

      2. You have incorrectly configured the AWS Region for your API call.

      3. Amazon Polly does not offer a feature to describe the list of available voices.

      4. Amazon Polly is not accessible from the AWS CLI because it is only in the AWS SDK.


    8. To what resource does this IAM policy grant access, and for which actions?


      {
        "Version": "2012-10-17",
        "Statement": {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::example_bucket"
        }
      }


      1. The policy grants full access to read the objects in the Amazon S3 bucket.

      2. The policy grants the holder the permission to list the contents of the Amazon S3
        bucket called example_bucket.

      3. Nothing. The policy was valid only until October 17, 2012 (2012-10-17), and is now
        expired.

      4. The policy grants the user access to list the contents of all Amazon S3 buckets within
        the current account.
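For question 8, a simplified evaluator sketch shows how the policy reads: the single statement allows s3:ListBucket on example_bucket and nothing else. This is a conceptual illustration only; real IAM evaluation also handles lists of statements, wildcards, conditions, and explicit denies.

```python
import json

# The policy from the question, parsed as ordinary JSON.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::example_bucket"
  }
}
""")

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    # Simplified sketch: a request is allowed only if an Allow statement
    # matches both the action and the resource exactly. Everything else
    # falls through to the default deny.
    stmt = policy["Statement"]
    return (stmt["Effect"] == "Allow"
            and stmt["Action"] == action
            and stmt["Resource"] == resource)

assert is_allowed(policy, "s3:ListBucket", "arn:aws:s3:::example_bucket")
assert not is_allowed(policy, "s3:GetObject", "arn:aws:s3:::example_bucket")
assert not is_allowed(policy, "s3:ListBucket", "arn:aws:s3:::other_bucket")
```

Note that the "Version" field is a policy-language version, not an expiration date.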

    9. When an IAM user makes an API call, that user’s long-term credentials are valid in which
      context?

      1. Only in the AWS Region in which their identity resides

      2. Only in the Availability Zone in which their identity resides



      3. Only in the edge location in which their identity resides

      4. Across multiple AWS Regions


    10. When you use identity federation to assume a role, where are the credentials you use to
      make AWS API calls generated?

      1. Access key ID and secret access key are generated locally on the client.

      2. The AWS Security Token Service (AWS STS) generates the access key ID, secret access
        key, and session token.

      3. The AWS Key Management Service (AWS KMS) generates a customer master key
        (CMK).

      4. Your Security Assertion Markup Language (SAML) identity provider generates the
        access key ID, secret access key, and session token.

    11. You have an on-premises application that needs to sample data from all your Amazon
      DynamoDB tables. You have defined an IAM user for your application called
      TableAuditor. How can you give the TableAuditor user read access to new DynamoDB
      tables as soon as they are created in your account?

      1. Define a custom IAM policy that lists each DynamoDB table. Revoke the access key,
        and issue a new access key for TableAuditor when tables are created.

      2. Create an IAM user and attach one custom IAM policy per AWS Region that has
        DynamoDB tables.

      3. Add the TableAuditor user to the IAM role DynamoDBReadOnlyAccess.

      4. Attach the AWS managed IAM policy AmazonDynamoDBReadOnlyAccess to the

        TableAuditor user.

    12. The principals who have access to assume an IAM role are defined in which document?

      1. IAM access policy

      2. IAM trust policy

      3. KMS grant token

      4. AWS credentials file


    13. A new developer has joined your small team. You would like to help your team member set
      up a development computer for access to the team account quickly and securely. How do
      you proceed?

      1. Generate an access key based on your IAM user, and share it with your team member.

      2. Create a new directory with AWS Directory Service, and assign permissions in the AWS
        Key Management Service (AWS KMS).

      3. Create an IAM user, add it to an IAM group that has the appropriate permissions, and
        generate a long-term access key.

      4. Create a new IAM role for this team member, assign permissions to the role, and
        generate a long-term access key.



    14. You have been working with the Amazon Polly service in your application by using the
      Python SDK for Linux. You are building a second application in C#, and you would like to
      run that application on a separate Windows Server with .NET. How can you proceed?

      1. Migrate all your code for all applications to C#, and modify your account to a
        Windows account.

      2. Go to the Amazon Polly service, and change the supported languages to include .NET.

      3. Install the AWS SDK for .NET on your Windows Server, and leave your existing
        application unchanged.

      4. Implement a proxy service that accepts your API requests, and translate them to
        Python.

    15. You are a Virginia-based company, and you have been asked to implement a custom
      application exclusively for customers in Australia. This application has no dependencies on
      any of your existing applications. What is a method you use to keep the customer latency to
      this new application low?

      1. Set up an AWS Direct Connect (DX) connection between your on-premises environment
        and US East (N. Virginia), and host the application from your own data center in Virginia.

      2. Create all resources for this application in the Asia Pacific (Sydney) Region, and
        manage them from your current account.

      3. Deploy the application to the US East (N. Virginia) Region, and select Amazon EC2
        instances with enhanced networking.

      4. It does not matter which region you select, because all resources are automatically
        replicated globally.

80 Chapter 2 Introduction to Compute and Networking


Review Questions

  1. When you launch an Amazon Elastic Compute Cloud (Amazon EC2) instance, which of the
    following is the most specific type of AWS entity in which you can place it?

    1. Region

    2. Availability Zone

    3. Edge location

    4. Data center


  2. You have saved SSH connection information for an Amazon Elastic Compute Cloud (Amazon
    EC2) instance that you launched in a public subnet. You stopped the instance after you last
    used it. Now that you have started the instance again, you are unable to connect to it using
    the saved information. Which of the following could be the cause?

    1. Your SSH key pair has automatically expired.

    2. The public IP of the instance has changed.

    3. The security group rules have expired.

    4. SSH is enabled only for the first boot of an Amazon EC2 instance.


  3. You are working from a new location today. You are unable to initiate a Remote Desktop
    Protocol (RDP) connection to your Windows instance, which is located in a public subnet.
    What could be the cause?

    1. Your new IP address may not match the inbound security group rules.

    2. Your new IP address may not match the outbound security group rules.

    3. RDP is not available for Windows instances, only SSH.

    4. RDP is enabled only for the first 24 hours of your instance runtime.


  4. You have a backend Amazon EC2 instance providing a web service to your web server
    instances. Your web servers are in a public subnet. You would like to block inbound
    requests from the internet to your backend instance but still allow the instance to make
    API requests over the public internet. What steps must you take? (Select TWO.)

    1. Launch the instance in a private subnet and rely on a NAT gateway in a public subnet
      to forward outbound internet requests.

    2. Configure the security group for the instance to explicitly deny inbound requests from
      the internet.

    3. Configure the network access control list (network ACL) for the public subnet to
      explicitly deny inbound web requests from the internet.

    4. Modify the inbound security group rules for the instance to allow only inbound
      requests from your web servers.



  5. You have launched an Amazon Elastic Compute Cloud (Amazon EC2) instance and loaded
    your application code on it. You have now discovered that the instance is missing
    applications on which your code depends. How can you resolve this issue?

    1. Modify the instance profile to include the software dependencies.

    2. Create an AWS Identity and Access Management (IAM) user, and sign in to the
      instance to install the dependencies.

    3. Sign in to the instance as the default user, and install any additional dependencies that
      you need.

    4. File an AWS Support ticket, and request to install the software on your instance.


  6. How can code running on an Amazon Elastic Compute Cloud (Amazon EC2) instance
    automatically discover its public IP address?

    1. The public IP address is presented to the OS on the instance automatically. No extra
      steps are required.

    2. The instance can query another Amazon EC2 instance in the same Amazon Virtual
      Private Cloud (Amazon VPC).

    3. You must use a third-party service to look up the public IP.

    4. The instance can make an HTTP query to the Amazon EC2 metadata service at
      169.254.169.254.

  7. How can you customize the software of your Amazon Elastic Compute Cloud (Amazon
    EC2) instance beyond what the Amazon Machine Image (AMI) provides?

    1. Provide a user data attribute at launch that contains a script or directives to install
      additional packages.

    2. Additional packages are installed automatically by placing them in a special Amazon
      Simple Storage Service (Amazon S3) bucket in your account.

    3. You do not have permissions to install new software on Amazon EC2 aside from what
      is in the AMI.

    4. Unlock the instance using the AWS Key Management Service (AWS KMS) and then
      sign in to install new packages.
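For question 7, the user data attribute is a script (or cloud-init directives) that runs on first boot. A minimal sketch for an Amazon Linux instance might look like the following; the packages installed are illustrative only.

```shell
#!/bin/bash
# Hypothetical user data for an Amazon Linux instance: runs once,
# as root, when the instance first boots.
yum update -y
yum install -y httpd            # install the Apache web server
systemctl enable --now httpd    # start it and enable it on reboot
```

You supply this script in the launch wizard's user data field or via the `--user-data` option of `aws ec2 run-instances`.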

  8. You have a process running on an Amazon Elastic Compute Cloud (Amazon EC2) instance
    that exceeds the 2 GB of RAM allocated to the instance. This is causing the process to run
    slowly. How can you resolve the issue?

    1. Stop the instance, change the instance type to one with more RAM, and then start the
      instance.

    2. Modify the RAM allocation for the instance while it is running.

    3. Take a snapshot of the data and then launch a new instance. You cannot change the
      RAM allocation.

    4. Send an email to AWS Support to install additional RAM on the server.



  9. You have launched an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance,
    and you would like to connect to it using the Remote Desktop Protocol. The instance is in
    a public subnet and has a public IP address. How do you find the password to the
    Administrator account?

    1. Decrypt the password by using the private key from the Amazon EC2 key pair that you
      used to launch the instance.

    2. Use the password that you provided when you launched the instance.

    3. Create a new AWS Identity and Access Management (IAM) role, and use the password
      for that role.

    4. Create an IAM user, and use the password for that user.


  10. What steps must you take to ensure that an Amazon EC2 instance can receive web requests
    from customers on the internet? (Select THREE.)

    1. Assign a public IP address to the instance.

    2. Launch the instance in a subnet where the route table routes internet-bound traffic to
      an internet gateway.

    3. Launch the instance in a subnet where the route table rules send internet-bound traffic
      to a NAT gateway.

    4. Set the outbound rules for the security group to allow HTTP and HTTPS traffic.

    5. Set the inbound rules for the security group to allow HTTP and HTTPS traffic.


  11. Which of the following are true about Amazon Machine Images (AMIs)? (Select TWO.)

    1. An AMI can be used to launch one or multiple Amazon EC2 instances.

    2. An AMI is automatically available in all AWS Regions.

    3. All AMIs are created and maintained by AWS.

    4. AMIs are available for both Windows and Linux instances.


  12. Which of the following are true about Amazon Elastic Compute Cloud (Amazon EC2)
    instance types? (Select TWO.)

    1. All Amazon EC2 instance types include instance store for ephemeral storage.

    2. All Amazon EC2 instance types can use EBS volumes for persistent storage.

    3. Amazon EC2 instances cannot be resized once launched.

    4. Some Amazon EC2 instances may have access to GPUs or other hardware
      accelerators.

  13. Which of the following actions are valid based on the Amazon Elastic Compute Cloud
    (Amazon EC2) instance lifecycle? (Select TWO.)

    1. Starting a previously terminated instance

    2. Starting a previously stopped instance

    3. Rebooting a stopped instance

    4. Stopping a running instance



  14. You have a development Amazon Elastic Compute Cloud (Amazon EC2) instance where
    you have installed Apache Web Server and MySQL. How do you verify that the web server
    application can communicate with the database given that they are both running on the
    same instance?

    1. Modify the security group for the instance.

    2. Assign the instance a public IP address.

    3. Modify the network access control list (network ACL) for the instance.

    4. No extra configuration is required.


  15. What type of route must exist in the associated route table for a subnet to be a public subnet?

    1. A route to a VPN gateway

    2. Only the local route is required.

    3. A route to an internet gateway

    4. A route to a NAT gateway or NAT instance

    5. A route to an Amazon VPC endpoint


  16. What type of route must exist in the associated route table for a subnet to be a private
    subnet that allows outbound internet access?

    1. A route to a VPN gateway

    2. Only the local route is required.

    3. A route to an internet gateway

    4. A route to a NAT gateway or NAT instance

    5. A route to an Amazon Virtual Private Cloud (Amazon VPC) endpoint
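The routing rules behind questions 15 and 16 can be sketched as a small classifier. The route-table representation and gateway IDs below are hypothetical, chosen only to mirror the AWS naming convention.

```python
def classify_subnet(routes: list) -> str:
    """Classify a subnet from its route table (rough sketch).

    A route to an internet gateway (igw-*) makes the subnet public;
    a route to a NAT gateway (nat-*) gives a private subnet outbound
    internet access; only the local route means no internet at all.
    """
    targets = {r["target"] for r in routes}
    if any(t.startswith("igw-") for t in targets):
        return "public"
    if any(t.startswith("nat-") for t in targets):
        return "private with outbound internet"
    return "private, no internet"

public = [{"dest": "10.0.0.0/16", "target": "local"},
          {"dest": "0.0.0.0/0", "target": "igw-0abc"}]
private = [{"dest": "10.0.0.0/16", "target": "local"},
           {"dest": "0.0.0.0/0", "target": "nat-0def"}]

assert classify_subnet(public) == "public"
assert classify_subnet(private) == "private with outbound internet"
```

Note the asymmetry: the internet gateway allows traffic in both directions, while the NAT gateway only forwards connections initiated from inside the subnet.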


  17. Which feature of Amazon Virtual Private Cloud (Amazon VPC) enables you to see which
    network requests are being accepted or rejected in your Amazon VPC?

    1. Internet gateway

    2. NAT gateway

    3. Route table

    4. Amazon VPC Flow Log


  18. Which AWS service enables you to track the CPU utilization of an Amazon Elastic
    Compute Cloud (Amazon EC2) instance?

    1. AWS Config

    2. AWS Lambda

    3. Amazon CloudWatch

    4. Amazon Virtual Private Cloud (Amazon VPC)


  19. What happens to the data stored on an Amazon Elastic Block Store (Amazon EBS) volume
    when you stop an Amazon Elastic Compute Cloud (Amazon EC2) instance?

    1. The data is moved to Amazon Simple Storage Service (Amazon S3).

    2. The data persists in the EBS volume.

    3. The volume is deleted.

    4. An EBS-backed instance cannot be stopped.



  20. Which programming language can you use to write the code that runs on an Amazon EC2
    instance?

    1. C++

    2. Java

    3. Ruby

    4. JavaScript

    5. Python

    6. All of the above


  21. You have launched an Amazon EC2 instance in a public subnet. The instance has a public
    IP address, and you have confirmed that the Apache web server is running. However, your
    internet users are unable to make web requests to the instance. How can you resolve the
    issue? (Select TWO.)

    1. Modify the security group to allow outbound traffic on port 80 to anywhere.

    2. Modify the security group for the web server to allow inbound traffic port 80 from
      anywhere.

    3. Modify the security group for the web server to allow inbound traffic on port 443
      from anywhere.

    4. Modify the security group to allow outbound traffic from port 443 to anywhere.


  22. Which of the following are the customer’s responsibility concerning Amazon EC2
    instances? (Select TWO.)

    1. Decommissioning storage hardware

    2. Patching the guest operating system

    3. Securing physical access to the host machine

    4. Managing the sign-in accounts and credentials on the guest operating system

    5. Maintaining the software that runs on the underlying host machine

170 Chapter 3 Hello, Storage


Review Questions

  1. You are developing an application that will run across dozens of instances. It uses some
    components from a legacy application that requires some configuration files to be copied
    from a central location and be held on a volume local to each of the instances. You plan
    to modify your application with a new component in the future that will hold this
    configuration in Amazon DynamoDB. However, in the interim, which storage option
    should you use that will provide the lowest cost and the lowest latency for your application
    to access the configuration files?

    1. Amazon S3

    2. Amazon EBS

    3. Amazon EFS

    4. Amazon EC2 instance store


  2. In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from
    block and file storage? (Select TWO.)

    1. Amazon S3 stores data in fixed size blocks.

    2. Objects are identified by a numbered address.

    3. Objects can be any size.

    4. Objects contain both data and metadata.

    5. Objects are stored in buckets.


  3. You are restoring an Amazon Elastic Block Store (Amazon EBS) volume from a snapshot.
    How long will it take before the data is available?

    1. It depends on the provisioned size of the volume.

    2. The data will be available immediately.

    3. It depends on the amount of data stored on the volume.

    4. It depends on whether the attached instance is an Amazon EBS–optimized instance.


  4. What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)?
    (Select THREE.)

    1. All objects have a URL.

    2. Amazon S3 can store unlimited amounts of data.

    3. Buckets can be mounted to the file system of multiple Amazon EC2 instances.

    4. Amazon S3 uses a Representational State Transfer (REST) application program
      interface (API).

    5. You must pre-allocate the storage in a bucket.



  5. Amazon S3 Glacier is well suited to data that is which of the following? (Select TWO.)

    1. Infrequently or rarely accessed

    2. Immediately available when needed

    3. Available after a three- to five-hour restore period

    4. Frequently erased within 30 days


  6. You have valuable media files hosted on AWS and want them to be served only to
    authenticated users of your web application. You are concerned that your content could be
    stolen and distributed for free. How can you protect your content?

    1. Use static web hosting.

    2. Generate presigned URLs for content in the web application.

    3. Use AWS Identity and Access Management (IAM) policies to restrict access.

    4. Use logging to track your content.
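The idea behind presigned URLs in question 6 can be sketched with a generic HMAC signature over a path and an expiry time. This is a conceptual illustration only, with a made-up signing key; Amazon S3's real scheme is AWS Signature Version 4, derived from your AWS credentials.

```python
import hashlib
import hmac

SECRET = b"example-signing-key"  # hypothetical; S3 signs with your AWS credentials

def presign(path: str, expires_in: int, now: int) -> str:
    # The server signs the path together with an absolute expiry time,
    # so neither can be tampered with by the URL's holder.
    expires = now + expires_in
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{path}?Expires={expires}&Signature={sig}"

def verify(url: str, now: int) -> bool:
    # Recompute the signature and check the expiry on every request.
    path, query = url.split("?")
    params = dict(p.split("=") for p in query.split("&"))
    expires = int(params["Expires"])
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return now < expires and hmac.compare_digest(expected, params["Signature"])

url = presign("/media/video.mp4", expires_in=300, now=1_700_000_000)
assert verify(url, now=1_700_000_000 + 60)       # still valid
assert not verify(url, now=1_700_000_000 + 600)  # expired
```

Because the link expires, a stolen URL is only useful for a short window, which is why option 2 answers the content-theft concern.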


  7. Which of the following are features of Amazon Elastic Block Store (Amazon EBS)?
    (Select TWO.)

    1. Data stored on Amazon EBS is automatically replicated within an Availability Zone.

    2. Amazon EBS data is automatically backed up to tape.

    3. Amazon EBS volumes can be encrypted transparently to workloads on the attached
      instance.

    4. Data on an Amazon EBS volume is lost when the attached instance is stopped.


  8. Which option should you choose for Amazon EFS when tens, hundreds, or thousands of
    Amazon EC2 instances will be accessing the file system concurrently?

    1. General-Purpose performance mode

    2. RAID 0

    3. Max I/O performance mode

    4. Change to a larger instance


  9. Which of the following must be performed to host a static website in an Amazon Simple
    Storage Service (Amazon S3) bucket? (Select THREE.)

    1. Configure the bucket for static hosting, and specify an index and error document.

    2. Create a bucket with the same name as the website.

    3. Enable File Transfer Protocol (FTP) on the bucket.

    4. Make the objects in the bucket world-readable.

    5. Enable HTTP on the bucket.



  10. You have a workload that requires 1 TB of durable block storage at 1,500 IOPS during
    normal use. Every night there is an extract, transform, load (ETL) task that requires 3,000
    IOPS for 15 minutes. What is the most appropriate volume type for this workload?

    1. Use a Provisioned IOPS SSD volume at 3,000 IOPS.

    2. Use an instance store.

    3. Use a general-purpose SSD volume.

    4. Use a magnetic volume.
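For question 10, the general-purpose SSD (gp2) arithmetic works out as follows; gp2 volumes earn a baseline of 3 IOPS per GiB of volume size.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 earns a baseline of 3 IOPS per GiB of volume size,
    # with a floor of 100 IOPS and a ceiling of 16,000 IOPS.
    return min(max(3 * size_gib, 100), 16_000)

# A 1 TiB (1,024 GiB) volume gets a 3,072 IOPS baseline, which covers
# both the 1,500 IOPS daytime load and the 3,000 IOPS nightly ETL.
assert gp2_baseline_iops(1024) == 3072
```

Since the baseline alone covers the ETL burst, the cheaper general-purpose volume makes Provisioned IOPS unnecessary for this workload.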


  11. Which statements about Amazon S3 Glacier are true? (Select THREE.)

    1. It stores data in objects that live in buckets.

    2. Archives are identified by user-specified key names.

    3. Archives take 3–5 hours to restore.

    4. Vaults can be locked.

    5. It can be used as a standalone service and as an Amazon S3 storage class.


  12. You are developing an application that will be running on several hundred Amazon EC2
    instances. The application on each instance must concurrently access a shared set of files
    through a file system protocol. Which storage option should you choose?

    1. Amazon EFS

    2. Amazon EBS

    3. Amazon EC2 instance store

    4. Amazon S3


  13. You need to take a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume. How
    long will the volume be unavailable?

    1. It depends on the provisioned size of the volume.

    2. The volume will be available immediately.

    3. It depends on the amount of data stored on the volume.

    4. It depends on whether the attached instance is an Amazon EBS–optimized instance.


  14. Amazon Simple Storage Service (Amazon S3) bucket policies can restrict access to an
    Amazon S3 bucket and objects by which of the following? (Select THREE.)

    1. Company name

    2. IP address range

    3. AWS account

    4. Country of origin

    5. Objects with a specific prefix



  15. Which of the following are not appropriate use cases for Amazon Simple Storage Service
    (Amazon S3)? (Select TWO.)

    1. Storing static web content or hosting a static website

    2. Storing a file system mounted to an Amazon Elastic Compute Cloud (Amazon EC2) instance

    3. Storing backups for a relational database

    4. Primary storage for a database

    5. Storing logs for analytics


  16. Which features enable you to manage access to Amazon Simple Storage Service (Amazon S3)
    buckets or objects? (Select THREE.)

    1. Enable static website hosting on the bucket.

    2. Create a presigned URL for an object.

    3. Use an Amazon S3 Access Control List (ACL) on a bucket or object.

    4. Use a lifecycle policy.

    5. Use an Amazon S3 bucket policy.


  17. Your application stores critical data in Amazon Simple Storage Service (Amazon S3),
    which must be protected against inadvertent or intentional deletion. How can this data be
    protected? (Select TWO.)

    1. Use cross-region replication to copy data to another bucket automatically.

    2. Set a vault lock.

    3. Enable versioning on the bucket.

    4. Use a lifecycle policy to migrate data to Amazon S3 Glacier.

    5. Enable MFA Delete on the bucket.


  18. You have a set of users that have been granted access to your Amazon S3 bucket. For
    compliance purposes, you need to keep track of all files accessed in that bucket. To have a
    record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from
    where, what should you do?

    1. Enable versioning on the bucket.

    2. Enable website hosting on the bucket.

    3. Enable server access logging on the bucket.

    4. Create an AWS Identity and Access Management (IAM) bucket policy.

    5. Enable Amazon CloudWatch logs.


  19. What are some reasons to enable cross-region replication on an Amazon Simple Storage
    Service (Amazon S3) bucket? (Select THREE.)

    1. Your compliance requirements dictate that you store data at an even further distance
      than Availability Zones, which are tens of miles apart.

    2. Minimize latency when your customers are in two geographic regions.

    3. You need a backup of your data in case of accidental deletion.

    4. You have compute clusters in two different AWS Regions that analyze the same set of objects.

    5. Your data requires at least five nines of durability.



  20. Your company requires that all data sent to external storage be encrypted before being sent.
    You will be sending company data to Amazon S3. Which Amazon Simple Storage Service
    (Amazon S3) encryption solution will meet this requirement?

    1. Server-Side Encryption with AWS managed keys (SSE-S3)

    2. Server-Side Encryption with customer-provided keys (SSE-C)

    3. Client-side encryption with customer-managed keys

    4. Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)


  21. How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability?

    1. Data is automatically replicated to other regions.

    2. Data is automatically replicated within a region.

    3. Data is replicated only if versioning is enabled on the bucket.

    4. Data is automatically backed up on tape and restored if needed.

256 Chapter 4 Hello, Databases


Review Questions

  1. Which of the following does Amazon Relational Database Service (Amazon RDS) manage
    on your behalf? (Select THREE.)

    1. Database settings

    2. Database software installation and patching

    3. Query optimization

    4. Hardware provisioning

    5. Backups


  2. Which AWS database service is best suited for managing highly connected datasets?

    1. Amazon Aurora

    2. Amazon Neptune

    3. Amazon DynamoDB

    4. Amazon Redshift


  3. You are designing an ecommerce web application that will scale to potentially hundreds of
    thousands of concurrent users. Which database technology is best suited to hold the session
    state for large numbers of concurrent users?

    1. Relational database by using Amazon Relational Database Service (Amazon RDS)

    2. NoSQL database table by using Amazon DynamoDB

    3. Data warehouse by using Amazon Redshift

    4. MySQL on Amazon EC2


  4. How many read capacity units (RCUs) do you need to support 25 strongly consistent reads
    per second of 15 KB each?

    1. 100 RCUs

    2. 25 RCUs

    3. 10 RCUs

    4. 15 RCUs


  5. How many read capacity units (RCUs) do you need to support 25 eventually consistent
    reads per second of 15 KB each?

    1. 10 RCUs

    2. 25 RCUs

    3. 50 RCUs

    4. 15 RCUs
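The arithmetic behind questions 4 and 5 can be sketched as a short worked calculation:

```python
import math

def rcus(reads_per_sec: int, item_kb: float, strongly_consistent: bool) -> int:
    # One RCU supports one strongly consistent read per second of an item
    # up to 4 KB; eventually consistent reads need half as many units.
    units_per_read = math.ceil(item_kb / 4)
    total = reads_per_sec * units_per_read
    return total if strongly_consistent else math.ceil(total / 2)

assert rcus(25, 15, strongly_consistent=True) == 100   # 25 × ceil(15/4) = 100
assert rcus(25, 15, strongly_consistent=False) == 50   # half of 100 = 50
```

The key step is rounding the item size up to the next 4 KB boundary before multiplying by the request rate.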



  6. How many write capacity units (WCUs) are needed to support 100 writes per second of
    512 bytes each?

    1. 129 WCUs

    2. 25 WCUs

    3. 10 WCUs

    4. 100 WCUs
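Question 6 follows the same pattern for writes:

```python
import math

def wcus(writes_per_sec: int, item_kb: float) -> int:
    # One WCU supports one write per second of an item up to 1 KB;
    # item sizes round up to the next whole kilobyte.
    return writes_per_sec * math.ceil(item_kb)

assert wcus(100, 512 / 1024) == 100  # 512 bytes rounds up to 1 KB
```

A 512-byte item still consumes a full write capacity unit, so 100 writes per second cost 100 WCUs.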


  7. Your company is using Amazon DynamoDB, and they would like to implement a
    write-through caching mechanism. They would like to get everything up and running in
    only a few short weeks. Additionally, your company would like to refrain from managing
    any additional servers. You are the lead developer on the project; what should you
    recommend?

    1. Build your own custom caching application.

    2. Implement Amazon DynamoDB Accelerator (DAX).

    3. Run Redis on Amazon EC2.

    4. Run Memcached on Amazon EC2.


  8. Your company would like to implement a highly available caching solution for its SQL
    database running on Amazon RDS. Currently, all of its services are running in the AWS
    Cloud. As their lead developer, what should you recommend?

    1. Implement your own caching solution on-premises.

    2. Implement Amazon ElastiCache for Redis.

    3. Implement Amazon ElastiCache for Memcached.

    4. Implement Amazon DynamoDB Accelerator (DAX).


  9. A company is looking to run analytical queries and would like to implement a data
    warehouse. It estimates that it has roughly 300 TB of data, which is expected to double
    over the next three years. Which AWS service should you recommend?

    1. Relational database by using Amazon Relational Database Service (Amazon RDS)

    2. NoSQL database table by using Amazon DynamoDB

    3. Data warehouse by using Amazon Redshift

    4. Amazon ElastiCache for Redis


  10. A company is experiencing an issue with Amazon DynamoDB whereby the data is taking
    longer than expected to return from a query. You are tasked with investigating the problem.
    After looking at the application code, you realize that a Scan operation is being called
    for a large DynamoDB table. What should you do or recommend?

    1. Implement a query instead of a scan, if possible, as queries are more efficient than a
      scan.

    2. Do nothing; the problem should go away on its own.

    3. Implement a strongly consistent read.

    4. Increase the write capacity units (WCUs).



Review Questions

  1. Which components are required in an encryption system? (Select THREE.)

    1. A user to upload data

    2. Data to encrypt

    3. A database to store encryption keys

    4. A method to encrypt data

    5. A cryptographic algorithm


  2. Which are the components of key management infrastructure (KMI)? (Select TWO.)

    1. Storage layer

    2. Data layer

    3. Management layer

    4. Encryption layer


  3. Which of the following are methods for you and AWS to provide an encryption method and
    key management infrastructure (KMI)? (Select THREE.)

    1. You control the encryption method and key management, and AWS provides the
      storage component of the KMI.

    2. You control the storage component of the KMI, and AWS provides the encryption
      method and key management.

    3. You control the encryption method and KMI.

    4. AWS controls the encryption method and the entire KMI.

    5. None of the above.


  4. Which option uses AWS Key Management Service (AWS KMS) to manage keys to provide
    server-side encryption to Amazon Simple Storage Service (Amazon S3)?

    1. Amazon S3 managed encryption keys (SSE-S3)

    2. Customer-provided encryption keys (SSE-C)

    3. Use client-side encryption

    4. None of the above


  5. Which AWS encryption service provides asymmetric encryption capabilities?

    1. AWS Key Management Service (AWS KMS).

    2. AWS CloudHSM.

    3. AWS does not provide asymmetric encryption services.

    4. None of the above.

      280 Chapter 5 Encryption on AWS


  6. Which AWS encryption service provides symmetric encryption capabilities? (Select TWO.)

    1. AWS Key Management Service (AWS KMS).

    2. AWS CloudHSM.

    3. AWS does not provide symmetric encryption services.

    4. None of the above.


  7. An organization is using Amazon Simple Storage Service (Amazon S3), and it would like
    to ensure that all objects that are stored in Amazon S3 are encrypted. However, it does not
    want to be responsible for managing any of the encryption keys. As their lead developer,
    which service and feature should you recommend?

    1. Server-side encryption with AWS Key Management Service (SSE-KMS).

    2. Customer-provided encryption keys (SSE-C).

    3. Amazon S3 managed encryption keys (SSE-S3).

    4. This is not possible in AWS.


  8. Which feature of AWS Key Management Service (AWS KMS) enables you to use an AWS
    CloudHSM cluster for the storage of your encryption keys?

    1. Centralized key management

    2. AWS CloudHSM

    3. Custom key stores

    4. S3DistCp


  9. An organization is using AWS Key Management Service (AWS KMS) to support encryption
    and would like to encrypt Amazon Elastic Block Store (Amazon EBS) volumes. It wants
    to encrypt its volumes quickly, with little development time. As their lead developer, what
    should you recommend?

    1. Implement AWS KMS to encrypt the Amazon EBS volumes.

    2. Use open source or third-party encryption tooling.

    3. Use AWS CloudHSM.

    4. AWS does not provide a mechanism to encrypt Amazon EBS volumes.


  10. Which of the following AWS services does not integrate with AWS Key Management
    Service (AWS KMS)?

    1. Amazon Elastic Block Store (Amazon EBS)

    2. Amazon Simple Storage Service (Amazon S3)

    3. Amazon Redshift

    4. None of the above



Review Questions

  1. Which of the following AWS services enables you to automate your build, test, deploy, and
    release process every time there is a code change?

    1. AWS CodeCommit

    2. AWS CodeDeploy

    3. AWS CodeBuild

    4. AWS CodePipeline


  2. Which of the following resources can AWS Elastic Beanstalk use to create a web server
    environment? (Select FOUR.)

    1. Amazon Cognito User Pool

    2. AWS Serverless Application Model (AWS SAM) Local

    3. Auto Scaling group

    4. Amazon Elastic Compute Cloud (Amazon EC2)

    5. AWS Lambda


  3. Which of the following languages is not supported by AWS Elastic Beanstalk?

    1. Java

    2. Node.js

    3. Objective C

    4. Go


  4. What does the AWS Elastic Beanstalk service do?

    1. Deploys applications and architecture

    2. Stores static content

    3. Directs user traffic to Amazon Elastic Compute Cloud (Amazon EC2) instances

    4. Works with dynamic cloud changes as an IP address


  5. Which operating systems does AWS Elastic Beanstalk support? (Select TWO.)

    1. Amazon Linux

    2. Ubuntu

    3. Windows Server

    4. Fedora

    5. Jetty

      314 Chapter 6 Deployment Strategies


  6. Which of the following components can AWS Elastic Beanstalk deploy? (Select TWO.)

    1. Amazon Elastic Compute Cloud (Amazon EC2) instances with write capabilities to
      an Amazon DynamoDB table

    2. A worker application using Amazon Simple Queue Service (Amazon SQS)

    3. An Amazon Elastic Container Service (Amazon ECS) cluster supporting
      multiple containers

    4. A mixed fleet of Spot and Reserved Instances with four applications running in each
      environment

    5. A mixed fleet of Reserved Instances scheduled between 9 a.m. and 5 p.m. and
      On-Demand Instances used for processing data workloads when needed randomly

  7. Which of the following operations can AWS Elastic Beanstalk do? (Select TWO.)

    1. Access an Amazon Simple Storage Service (Amazon S3) bucket

    2. Connect to an Amazon Relational Database Service (Amazon RDS) database

    3. Install agents for Amazon GuardDuty service

    4. Create and manage Amazon WorkSpaces


  8. Which service can be used to restrict access to AWS Elastic Beanstalk resources?

    1. AWS Config

    2. Amazon Relational Database Service (Amazon RDS)

    3. AWS Identity and Access Management (IAM)

    4. Amazon Simple Storage Service (Amazon S3)


  9. Which AWS Identity and Access Management (IAM) entities are used when creating an
    environment? (Select TWO.)

    1. Federated role

    2. Service role

    3. Instance profile

    4. Profile role

    5. User name and access keys


  10. Which of the following describes how customers are charged for AWS Elastic Beanstalk?

    1. A monthly fee based on an hourly rate for use.

    2. A one-time upfront cost for each environment running.

    3. No additional charges.

    4. A fee is charged only when scaling to support traffic changes.



  11. Which account is billed for user-accessed AWS resources allocated by AWS Elastic
    Beanstalk?

    1. The account running the services

    2. The cross-account able to access the shared services

    3. The cross-account with the Amazon Simple Storage Service (Amazon S3) bucket
      holding a downloaded copy of the code artifact

    4. All accounts involved


  12. What can you not do to an Amazon Relational Database Service (Amazon RDS) instance
    with AWS Elastic Beanstalk?

    1. Create a database connection.

    2. Create a supported Oracle edition.

    3. Retain a database instance despite the deletion of the environment’s database.

    4. Create a snapshot of the existing database (before deletion).



Review Questions

  1. You have two AWS CodeDeploy applications that deploy to the same Amazon EC2 Auto
    Scaling group. The first deploys an e-commerce app, while the second deploys custom
    administration software. You are attempting to deploy an update to one application but
    cannot do so because another deployment is already in progress. You do not see any
    instances undergoing deployment at this time. What could be the cause of this?

    1. If both deployment groups reference the same Auto Scaling group, a failure of the
      first group’s deployment can block the second until the deployment times out. Since
      the instance that failed deployment has been terminated from the Auto Scaling group,
      the AWS CodeDeploy agent is unable to provide results to the service.

    2. The AWS CodeDeploy agent is not installed on the instances as part of the launch
      configuration user data script.

    3. If both deployment groups reference the same Auto Scaling group, a failure of the first
      group’s deployment can block the second until the deployment times out. Since the instance
      that failed deployment has been terminated from the Auto Scaling group, the AWS
      CodeDeploy service is unable to request status updates from the Amazon EC2 API.

    4. The AWS CodeDeploy agent is not installed in the Amazon Machine Image (AMI)
      being used.

  2. If you specify a hook script in the ApplicationStop lifecycle event of an AWS CodeDeploy
    appspec.yml, will it run on the first deployment to your instance(s)?

    1. Yes

    2. No

    3. The ApplicationStop lifecycle event does not exist.

    4. It will run only if your application is running.


  3. If a single pipeline contains multiple sources, such as an AWS CodeCommit repository and
    an Amazon S3 archive, under what circumstances will the pipeline be triggered?

    1. When either a commit is pushed to the repository or the archive is updated, regardless
      of timing.

    2. When a commit is pushed to the repository and the archive is updated at the same time.

    3. When either a commit is pushed to the repository or the archive is updated, but not
      when both are updated at the same time.

    4. AWS CodePipeline does not support multiple sources in the same pipeline.


  4. If you want to implement a deployment pipeline that deploys both source files and large binary
    objects to instance(s), how would you best achieve this while taking cost into consideration?

    1. Store both the source files and binary objects in AWS CodeCommit.

    2. Build the binary objects into the AMI of the instance(s) being deployed. Store the
      source files in AWS CodeCommit.

    3. Store the source files in AWS CodeCommit. Store the binary objects in an Amazon S3
      archive.

      378 Chapter 7 Deployment as Code


    4. Store the source files in AWS CodeCommit. Store the binary objects on an Amazon
      Elastic Block Store (Amazon EBS) volume, taking snapshots of the volume whenever a
      new one needs to be created.

    5. Store the source files in AWS CodeCommit. Store the binary objects in Amazon S3 and
      access them from an Amazon CloudFront distribution.

  5. Your team is building a deployment pipeline for a sensitive application in your
    environment using AWS CodeDeploy. The application consists of an Amazon EC2 Auto
    Scaling group of instances behind an Elastic Load Balancing load balancer. The nature
    of the application requires 100 percent availability during both successful and failed
    deployments. The development team wants to deploy changes multiple times per day.

    How would this be achieved at the lowest cost and with the fastest deployments?

    1. Rolling deployments with an additional batch

    2. Rolling deployments without an additional batch

    3. Blue/green deployments

    4. Immutable updates


  6. What would cause an access denied error when attempting to download an archive file
    from Amazon S3 during a pipeline execution?

    1. Insufficient user permissions for the user initiating the pipeline

    2. Insufficient user permissions for the user uploading the Amazon S3 archive

    3. Insufficient role permissions for the Amazon S3 service role

    4. Insufficient role permissions for the AWS CodePipeline service role


  7. How do you output build artifacts from AWS CodeBuild to AWS CodePipeline?

    1. Write the outputs to STDOUT from the build container.

    2. Specify artifact files in the buildspec.yml configuration file.

    3. Upload the files to Amazon S3 from the build environment.

    4. Output artifacts are not supported with AWS CodeBuild.


  8. What would be the most secure means of providing secrets to an AWS CodeBuild
    environment?

    1. Create a custom build environment with the secrets included in configuration files.

    2. Upload the secrets to Amazon S3 and download the object when the build job runs.
      Protect the bucket and object with an appropriate bucket policy.

    3. Save the secrets in AWS Systems Manager Parameter Store and query them as needed.
      Encrypt the secrets with an AWS Key Management Service (AWS KMS) key. Include
      appropriate AWS KMS permissions to your build environment’s IAM role.

    4. Include the secrets in the source repository or archive.


  9. In which of the pipeline actions can you execute AWS Lambda functions?

    1. Invoke

    2. Deploy



    3. Build

    4. Approval

    5. Test


  10. In what ways can pipeline actions be ordered in a stage? (Select TWO.)

    1. Series

    2. Parallel

    3. Stages support only one action each

    4. First-in-first-out (FIFO)

    5. Last-in-first-out (LIFO)


  11. If you would like to delete an AWS CloudFormation stack before you deploy a new one in
    your pipeline, what would be the correct set of actions?

    1. One action that specifies “Create or update a stack.”

    2. Two actions: the first specifies “Create or update a stack,” and the second specifies
      “Delete a stack.”

    3. Three actions: the first specifies “Delete a stack,” the second specifies “Create or
      update a stack,” and the third specifies “Replace a failed stack.”

    4. Two actions: the first specifies “Delete a stack,” and the second specifies “Create or
      update a stack.”

  12. How can you connect to an AWS CodeCommit repository without Git credentials?

    1. It is not possible.

    2. HTTPS

    3. SSH

    4. AWS CodeCommit credential helper


  13. Of the following, which event cannot be used to generate notifications to an Amazon
    Simple Notification Service (SNS) topic from AWS CodeCommit without using a trigger?

    1. Pull Request Creation

    2. Commit Comments

    3. Commit Creation

    4. Pull Request Comments


  14. Which pipeline actions support AWS CodeBuild projects? (Select TWO.)

    1. Invoke

    2. Deploy

    3. Build

    4. Approval

    5. Test



  15. Can data passed to build projects using environment variables be encrypted or protected?

    1. Yes, this is supported natively by AWS CodeBuild.

    2. No, it is not supported.

    3. No, but this can be enabled in the console.

    4. No, but this can be supported using other AWS products and services.


  16. What is the only deployment type supported by on-premises instances?

    1. In-place

    2. Blue/green

    3. Immutable

    4. Progressive


  17. If your AWS CodeDeploy configuration includes creation of a file, nginx.conf, but the
    file already exists on the server (prior to the use of AWS CodeDeploy), what is the default
    behavior that will occur during deployment?

    1. The file will be replaced.

    2. The file will be renamed nginx.conf.bak, and the new file will be created.

    3. The deployment will fail.

    4. The deployment will continue, but the file will not be modified.


  18. How does AWS Lambda support in-place deployments?

    1. Function versions are overwritten during the deployment.

    2. New function versions are created, and then version numbers are switched.

    3. AWS Lambda does not support in-place deployments.

    4. Function aliases are overwritten during the deployment.


  19. What is the minimum number of stages required by a pipeline in AWS CodePipeline?

    1. 0

    2. 1

    3. 2

    4. 3


  20. If an instance is running low on storage, and you find that there are a large number of
    deployment revisions stored by AWS CodeDeploy, what can be done to free up this space
    permanently?

    1. Delete the old revisions.

    2. Add an additional Amazon EBS volume.

    3. Configure the AWS CodeDeploy agent to store fewer revisions.

    4. Delete all of the revisions, and push all new code.

440 Chapter 8 Infrastructure as Code


Review Questions

  1. Which of the AWS CloudFormation template sections is/are required?

    1. AWSTemplateFormatVersion

    2. Parameters

    3. Metadata

    4. Resources

    5. All of the above


  2. You are writing an AWS CloudFormation template and would like to create an output
    value corresponding to your application’s website URL. The application is composed of
    two application servers in a private subnet behind an Elastic Load Balancing load balancer.
    The application servers read from the Amazon Relational Database Service (Amazon RDS)
    database instance. The logical IDs of the instances are AppServerA and AppServerB. The
    logical IDs of the load balancer and database are AppLB and AppDB, respectively.

    "Outputs" : {
      "AppEndpoint" : {
        "Description" : "URL to access the application",
        "Value" : "Value to return"
      }
    }

    Which code correctly completes the previous output declaration?

    1. { "Fn::Join": [ "", [ "https://", { "Ref": "AppLB" }, "/login.php" ] ] }

    2. { "Fn::Join": [ "", [ "https://", { "Fn::GetAtt": [ "AppServerA",
      "PublicDNSName" ] }, "/login.php" ] ] }

    3. { "Fn::Join": [ "", [ "https://", { "Ref": [ "AppLB", "DNSName" ] },
      "/login.php" ] ] }

    4. { "Fn::Join": [ "", [ "https://", { "Fn::GetAtt": [ "AppDB",
      "Endpoint.Address" ] }, "/login.php" ] ] }

    5. { "Fn::Join": [ "", [ "https://", { "Fn::GetAtt": [ "AppLB", "DNSName" ] },
      "/login.php" ] ] }
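
    As a sanity check on what the Fn::Join intrinsic produces, here is a minimal local
    model in Python; the load balancer DNS name is a made-up placeholder for whatever
    Fn::GetAtt on the AppLB DNSName attribute would resolve to at stack creation time.

    ```python
    def fn_join(delimiter, values):
        # Minimal local stand-in for CloudFormation's Fn::Join intrinsic:
        # it concatenates the value list with the given delimiter.
        return delimiter.join(values)

    # Hypothetical resolved DNS name for the AppLB load balancer.
    lb_dns = "app-lb-123456.us-east-1.elb.amazonaws.com"

    url = fn_join("", ["https://", lb_dns, "/login.php"])
    print(url)  # https://app-lb-123456.us-east-1.elb.amazonaws.com/login.php
    ```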

  3. An AWS CloudFormation template you have written uses a CreationPolicy to ensure
    that video transcoding instances launch and configure before the application server
    instances so that they are available before users are able to access the website. However,
    you are finding that the stack always reaches the creation policy’s timeout value before the
    transcoding instances complete setup.

    Why could this be? (Select THREE.)

    1. The user data script does not include a call to cfn-signal.

    2. The instance could not be launched because of account limits.



    3. The user data script fails before reaching the cfn-signal step.

    4. The instance cannot connect to the AWS CloudFormation endpoint when calling
      cfn-signal.

  4. When you attempt to update an Amazon Relational Database Service (Amazon RDS)
    instance in your AWS CloudFormation stack, you experience a "Resource failed to
    stabilize" error, which causes the stack to roll back any changes you attempted.

    What might be the cause of this error, and how could it be resolved?

    1. The database is corrupted and cannot be updated. Take a snapshot of the database,
      and use it to create a replacement.

    2. The database took too long to update. Remove the database from the AWS
      CloudFormation stack by applying a DeletionPolicy of Retain, and manage the
      database using the Amazon RDS console or AWS CLI.

    3. The database took too long to update, and the session credentials used by AWS
      CloudFormation timed out. Use a service role to perform the update.

    4. You have attempted to perform an update that is not supported by Amazon RDS.
      Review the specification documentation and attempt a valid update.

    5. I/O has not been halted on the database before performing the update, and AWS
      CloudFormation timed out waiting for database transactions to halt. Temporarily
      block I/O and attempt the update again.

  5. A custom resource associated with AWS Lambda in your stack creates successfully;
    however, attempts to update the resource result in the failure message "Custom
    Resource failed to stabilize in the expected time". After you add a service role to
    extend the timeout duration, the issue still persists.

    What may also be the cause of this error?

    1. The custom resource defined a function for handling the CREATE action but did not
      do the same for the UPDATE action; thus, a success or failure signal was not sent to
      AWS CloudFormation.

    2. The service role does not have appropriate permissions to invoke the custom resource
      function.

    3. The custom resource function no longer exists.

    4. All of the above.


  6. After you deploy an AWS Serverless Application Model (AWS SAM) template to AWS
    CloudFormation, can you view the original template? Why or why not?

    1. No, after the template is submitted and the AWS::Serverless transform is executed,
      an AWS CloudFormation-supported template is generated.

    2. Yes, the original template is saved and accessible using the get-stack-template AWS
      CLI command.

    3. Yes, it is saved in the Amazon Simple Storage Service (Amazon S3) bucket created by
      AWS CloudFormation for AWS SAM templates.

    4. No, AWS CloudFormation does not retain processed templates.



  7. When defining an AWS Serverless Application Model (AWS SAM) template, how can you
    create an Amazon API Gateway as part of the stack?

    1. By defining an AWS::ApiGateway::RestApi resource and any associated
      AWS::ApiGateway::Method resources

    2. One will be created automatically for you whenever AWS::Serverless::Function
      resources are declared with one or more Events.

    3. By defining an AWS::Serverless::Api and providing an inline or external Swagger
      definition

    4. AWS::ApiGateway::RestApi resources are not supported in AWS SAM templates.

    5. Options 1, 2, and 3


  8. Which of these helper scripts performs updates to OS configuration when an AWS
    CloudFormation stack updates?

    1. cfn-hup

    2. cfn-init

    3. cfn-signal

    4. cfn-update

  9. Which of these options allows you to specify a required number of signals to mark the
    resource as CREATE_COMPLETE?

    1. Wait Condition

    2. Wait Condition Handler

    3. CreationPolicy

    4. WaitCount

  10. How would you preview the changes a stack update will make without affecting any
    resources in your account?

    1. Create a change set.

    2. Perform the stack update, and then manually roll back.

    3. Perform the stack update on a test stack.

    4. Do a manual diff of both templates.


  11. How would you access a property of a resource created in a nested stack?

    1. This cannot be done.

    2. In the child stack, declare the resource property as a stack output. In the parent
      stack, use Fn::GetAtt and pass in two parameters, the child stack logical ID and
      Outputs.NestedStackOutputName.

    3. In the child stack, export the resource property. In the parent stack, import the
      exported value.

    4. Use the cross-stack references.
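
    A minimal sketch of the nested-stack pattern described in option 2, written as a
    Python dict standing in for a JSON template fragment; the stack logical ID
    (ChildStack), the child output name (SubnetId), and the template URL are hypothetical
    names used only for illustration.

    ```python
    import json

    # Parent-stack fragment: surface an output exported by a nested child stack.
    # "ChildStack", "SubnetId", and the TemplateURL are hypothetical placeholders.
    parent_fragment = {
        "Resources": {
            "ChildStack": {
                "Type": "AWS::CloudFormation::Stack",
                "Properties": {
                    "TemplateURL": "https://s3.amazonaws.com/example-bucket/child.json"
                },
            }
        },
        "Outputs": {
            "ChildSubnet": {
                # Fn::GetAtt takes the child stack's logical ID and
                # "Outputs.<OutputName>" to read the nested stack's output.
                "Value": {"Fn::GetAtt": ["ChildStack", "Outputs.SubnetId"]}
            }
        },
    }

    print(json.dumps(parent_fragment["Outputs"]["ChildSubnet"]["Value"]))
    ```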



  12. By default, with what permissions will AWS CloudFormation stack operations perform?

    1. Full administrator

    2. The permissions of the user performing the operation

    3. The AWS CloudFormation service role

    4. AWS CloudFormation does not use permissions


  13. An AWS CloudFormation template declares two resources: an AWS Lambda function and
    an Amazon DynamoDB table. The function code is declared inline as part of the template
    and references the table. In what order will AWS CloudFormation provision the two
    resources?

    1. Amazon DynamoDB table, AWS Lambda function

    2. AWS Lambda function, Amazon DynamoDB table

    3. This cannot be determined ahead of time.

    4. This depends on the template.


  14. Which occurs during a replacing update?

    1. The resource becomes unavailable.

    2. The resource physical ID changes.

    3. A new resource is created.

    4. The original resource is deleted during the cleanup phase.

    5. All of the above


  15. Which of the update types results in resource downtime? (Select TWO.)

    1. Update with No Interruption

    2. Update with Some Interruption

    3. Replacing Update

    4. Update with No Data

    5. Static Update


  16. What must occur before a stack that exports an output can be deleted?

    1. Any stacks importing the exported value must remove the import.

    2. The export must be removed from the stack.

    3. Nothing is required.

    4. The stack must be deleted.


  17. If an AWS CloudFormation stack is in UPDATE_IN_PROGRESS state, which of the states are
    possible transitions? (Select THREE.)

    1. UPDATE_ROLLBACK_COMPLETE

    2. UPDATE_FAILED

    3. UPDATE_ROLLBACK_FAILED

    4. UPDATE_COMPLETE

    5. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS



  18. What does it mean when an AWS CloudFormation stack is in the
    UPDATE_COMPLETE_CLEANUP_IN_PROGRESS state?

    1. The stack has failed to update, and it is removing newly created resources.

    2. The stack has successfully updated, and it is removing old resources.

    3. The stack has successfully updated, and it is removing new resources.

    4. The stack has failed to update, and it is removing old resources.


  19. Which of the formats are valid for an AWS CloudFormation template? (Select TWO.)

    1. YAML

    2. XML

    3. JSON

    4. Markdown

    5. LaTeX


  20. What are some challenges to consider when using the AWS Command Line Interface
    (AWS CLI) or AWS software development kits (AWS SDKs) to provision and manage
    infrastructure compared to AWS CloudFormation?

    1. Reduction of human error

    2. Repeatable infrastructure

    3. Reduced IAM permissions requirements

    4. Versionable infrastructure

    5. All of the above


  21. What does a service token represent in a custom resource declaration?

    1. The AWS service that receives the request

    2. The Amazon Simple Notification Service (Amazon SNS) or AWS Lambda resource
      Amazon Resource Name (ARN) that receives the request

    3. The on-premises server IP address that receives the request

    4. The type of action to take

    5. The commands to execute for the custom resource


  22. You are creating a custom resource associated with AWS Lambda that will execute several
    database functions in an Amazon Relational Database Service (Amazon RDS) database
    instance. As part of this, the functions will return data you would like to use in other
    resources declared in your AWS CloudFormation template.

    How would you best pass this data to the other resources declared in the template?

    1. Store the data in a JSON file in an Amazon Simple Storage Service (Amazon S3)
      bucket, and use the AWS Command Line Interface (AWS CLI) to download the object.

    2. Store the output data in AWS Systems Manager Parameter Store, and query the
      parameter store using the AWS CLI.

    3. Use custom resource outputs to declare the returned data as resource properties.
      Then, query the properties using the Fn::GetAtt intrinsic function.

    4. This cannot be accomplished.



Review Questions

  1. Which of the following AWS OpsWorks Stacks limits cannot be raised?

    1. Maximum stacks per account, per region

    2. Maximum layers per stack

    3. Maximum instances per layer

    4. Maximum apps per stack

    5. None of the above


  2. After submitting changes to your cookbook repository, you notice that executing
    cookbooks on your AWS OpsWorks instances does not result in any changes taking
    place, even though the logs show successful Chef runs.

    What could be the cause of this?

    1. The instances are unable to connect to the cookbook repository or archive location
      because of networking or permissions errors.

    2. The AWS OpsWorks Stacks agent running on the instance is enforcing cookbook
      caching, resulting in cached copies being used instead of the new versions.

    3. The version of the cookbook specified in the recipe list for the lifecycle event is
      incorrect.

    4. The custom cookbooks have not yet been downloaded to the instances.


  3. When will an AWS OpsWorks Stacks instance register and deregister from an Elastic Load
    Balancing load balancer associated with the layer?

    1. Instances are registered or deregistered manually only.

    2. Instances will be registered when they enter an online state and are deregistered when
      they leave an online state.

    3. As an administrator, you are responsible for including the registration and deregistration
      within your Chef recipes and assigning the recipes to the appropriate lifecycle event.

    4. Instances are registered when they are created and not deregistered until they are
      terminated.

  4. You have an Amazon ECS cluster that runs a single service with one task. The cluster
    currently contains enough instances to support the containers you define in your task,
    with no additional compute resources to spare (other than those needed by the underlying
    OS and Docker). Currently the service is configured with a maximum in-service
    percentage of 100 percent and a minimum of 100 percent. When you attempt to update
    the service, nothing happens for an extended period of time, as the replacement task
    appears to be stuck as it launches.

    How would you resolve this? (Select TWO.)

    1. The current configuration prevents new tasks from starting because of insufficient
      resources. Add enough instances to the cluster to support the additional task temporarily.

    2. The current configuration prevents new tasks from starting because of insufficient
      resources. Modify the configuration to have a maximum in-service percentage of 200
      percent and a minimum of 0 percent.

      492 Chapter 9 Configuration as Code


    3. Configure the cluster to leverage an AWS Auto Scaling group and scale out additional
      cluster instances when CPU Utilization is over 90 percent.

    4. Submit a new update to replace the one that appears to be failing.


  5. Which party is responsible for patching and maintaining underlying clusters when you use
    the AWS Fargate launch type?

    1. The customer

    2. Amazon Web Services (AWS)

    3. Docker

    4. Independent software vendors


  6. Why should instances in a single AWS OpsWorks Stacks layer have the same functionality
    and purpose?

    1. Because all instances in a layer run the same recipes

    2. To keep the console clean

    3. To stop and start at the same time

    4. To all run configure lifecycle events at the same time


  7. Where do instances in an AWS OpsWorks Stacks stack download custom cookbooks?

    1. The Chef Server

    2. They are included in the Amazon Machine Image (AMI).

    3. The custom cookbook repository

    4. Amazon Elastic Container Service (Amazon ECS)


  8. How would you migrate an Amazon Relational Database Service (Amazon RDS) layer
    between two stacks in the same region?

    1. Supply the connection information to the second stack as custom JSON to ensure that
      the instances can connect. Remove the Amazon RDS layer from the first stack. Add the
      Amazon RDS layer to the second stack. Remove the connection custom JSON.

    2. Add the Amazon RDS layer to the second stack and remove it from the first.

    3. Create a new database instance, migrate data to the new instance, and associate it with
      the second stack using an Amazon RDS layer.

    4. This is not possible.


  9. Which AWS OpsWorks Stacks instance type would you use for predictable increases in
    traffic or workload for a stack?

    A. 24/7

    B. Load-based

    C. Time-based

    D. On demand

      Review Questions 493


  10. Which AWS OpsWorks Stacks instance type would you use for random, unpredictable
    increases in traffic or workload for a stack?

    A. 24/7

    B. Load-based

    C. Time-based

    D. Spot


  11. What component is responsible for stopping and starting containers on an Amazon Elastic
    Container Service (Amazon ECS) cluster instance?

    1. The Amazon ECS agent running on the instance

    2. The Amazon ECS service role

    3. AWS Systems Manager

    4. The customer


  12. What is Service-Oriented Architecture (SOA)?

    1. The use of multiple AWS services to decouple infrastructure components and achieve
      high availability

    2. A software design practice where applications divide into discrete components
      (services) that communicate with each other in such a way that individual services
      do not rely on one another for their successful operation

    3. A practice in which multiple teams develop application components with no knowledge
      of other teams and their components

    4. Leasing services from different vendors instead of doing internal development


  13. How many containers can a single task definition describe?

    1. 1

    2. Up to 3

    3. Up to 5

    4. Up to 10


  14. You have a web proxy application that you would like to deploy in containers with the use
    of Amazon Elastic Container Service (Amazon ECS). Typically, your application binds to
    port 80 on the instance on which it runs. How can you use an application load balancer to
    run more than one proxy container on each instance in your cluster?

    1. Do not configure the container to bind to port 80. Instead, configure the Application
      Load Balancer (ALB) with dynamic host port mapping so that a random port is bound.
      The ALB will route traffic coming in on port 80 to the port on which the container is
      listening.

    2. Configure a Port Address Translation (PAT) instance in Amazon Virtual Private Cloud
      (Amazon VPC).

    3. If the container binds to a specific port, only one copy can launch per instance.

    4. Configure a classic load balancer to use dynamic host port mapping.
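    The dynamic host port mapping in option 1 is easiest to see in the task definition
    itself. A minimal sketch in Python (the container name and image are hypothetical)
    shows the portMappings shape: a hostPort of 0 asks Amazon ECS to assign an ephemeral
    host port at launch, so several copies of the container can share one instance while
    all listening on container port 80.

    ```python
    import json

    # Sketch of a container definition using dynamic host port mapping.
    # hostPort 0 means "let ECS assign an ephemeral port on the instance";
    # the ALB target group then routes port 80 traffic to whichever port
    # each container copy actually received.
    container_definition = {
        "name": "web-proxy",            # hypothetical container name
        "image": "example/proxy:1.0",   # hypothetical image
        "portMappings": [
            {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
        ],
    }

    mapping = json.loads(json.dumps(container_definition))["portMappings"][0]
    print(mapping["containerPort"], mapping["hostPort"])  # 80 0
    ```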



  15. Which Amazon Elastic Container Service (Amazon ECS) task placement policy ensures that
    tasks are distributed as much as possible in a single cluster?

    1. Spread

    2. Binpack

    3. Random

    4. Least Cost



Review Questions

  1. You need to grant a user, who is outside your AWS account, access to an object in an
    Amazon Simple Storage Service (Amazon S3) bucket. Which is the best way to provide
    access?

    1. Create a role and assign that role to the user.

    2. Create a user ID within Identity and Access Management (IAM) and assign the user ID
      a policy that allows access.

    3. Create a new AWS account, assign that user to the account, and then give the account
      cross-account access.

    4. Have the user create a user ID using a third-party identity provider (IdP), and based on
      that user ID, assign a policy that permits access.

  2. Which of the following is the purpose of an identity provider (IdP)?

    1. To control access to applications

    2. To control access to the AWS infrastructure

    3. To minimize the opportunity to assign the incorrect policy

    4. To answer the question “Who are you?”


  3. Which of the following is the best way to minimize misuse of AWS credentials?

    1. Set up multi-factor authentication (MFA).

    2. Embed the credentials in the bastion host and control access to the bastion host.

    3. Put a condition on all of your policies that allows execution only from your corporate
      IP range.

    4. Make sure that you have a limited number of credentials and limit the number of
      people that can use them.

  4. Which of the following is not a valid identity provider (IdP) for Amazon Cognito?

    1. Google

    2. Microsoft Active Directory

    3. Your own identity store

    4. A Security Assertion Markup Language (SAML) 1.0–based IdP


  5. Which of the following is one benefit of using AWS as an identity provider (IdP) to access
    non-AWS resources?

    1. AWS cannot be used as an IdP for non-AWS services.

    2. Using AWS as an IdP allows you to use Amazon CloudWatch to monitor activity.

    3. Using AWS as an IdP allows you to use AWS CloudTrail to audit who is using the
      service.

    4. Using AWS as an IdP allows you to assign policies to non-AWS resources.

      518 Chapter 10 Authentication and Authorization


  6. Which of the following are benefits of using the Active Directory Connector (AD
    Connector)? (Select TWO.)

    1. Easy setup

    2. Ability to connect to multiple Active Directory domains with a single connection

    3. Ability to configure changes to Active Directory on your existing Active Directory
      console

    4. Ability to support authentication to non-AWS services


  7. Which of the following is a prerequisite for using AWS Single Sign-On (AWS SSO)?

    1. Set up AWS Organizations and enable all features.

    2. Make sure that your identity provider (IdP) is Security Assertion Markup Language
      (SAML) 2.0 compatible.

    3. Deploy AWS Simple Active Directory (Simple AD).

    4. Deploy Amazon Cognito.


  8. AWS Security Token Service (AWS STS) supports a number of different API operations
    for requesting tokens. Which operation would you use to establish a longer-term session?

    1. AssumeRole

    2. GetUserToken

    3. GetFederationToken

    4. GetSessionToken

  9. Which of the following is not a service that AWS Managed Microsoft AD provides?

    1. Daily snapshots

    2. Ability to manage the Amazon Elastic Compute Cloud (Amazon EC2) instances that
      AWS Managed Microsoft AD is running on

    3. Monitoring

    4. Ability to sync with on-premises Active Directory


  10. You are using an existing RADIUS-based multi-factor authentication (MFA) infrastructure.
    Which AWS service is your best choice?

    1. Active Directory Connector (AD Connector)

    2. AWS Managed Microsoft AD

    3. Simple Active Directory (Simple AD)

    4. No AWS service would be suitable.

582 Chapter 11 Refactor to Microservices


Review Questions

  1. When a user submits a build into the build system, you want to send an email to the user
    acknowledging that you have received the build request, and start the build. To perform
    these actions at the same time, what type of state should you use?

    1. Choice

    2. Parallel

    3. Task

    4. Wait
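    For question 1, a Parallel state in AWS Step Functions runs each of its branches
    concurrently. A minimal Amazon States Language fragment (state names and Lambda ARNs
    are illustrative):

    ```json
    {
      "NotifyAndBuild": {
        "Type": "Parallel",
        "End": true,
        "Branches": [
          {
            "StartAt": "SendAcknowledgementEmail",
            "States": {
              "SendAcknowledgementEmail": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-email",
                "End": true
              }
            }
          },
          {
            "StartAt": "StartBuild",
            "States": {
              "StartBuild": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:start-build",
                "End": true
              }
            }
          }
        ]
      }
    }
    ```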

  2. Suppose that a queue has no consumers. The queue has a maximum message retention
    period of 14 days. After 14 days, what happens?

    1. After 14 days, the messages are deleted and move to the dead-letter queue.

    2. After 14 days, the messages are deleted and do not move to the dead-letter queue.

    3. After 14 days, the messages are not deleted.

    4. After 14 days, the messages become invisible.


  3. What is the maximum size of an Amazon Simple Queue Service (Amazon SQS) message?

    1. 256 KB

    2. 128 KB

    3. 1 MB

    4. 5 MB


  4. You want to send a 1 GB file through Amazon Simple Queue Service (Amazon SQS). How
    can you do this?

    1. This is not possible.

    2. Save the file in Amazon Simple Storage Service (Amazon S3) and then send a link to
      the file on Amazon SQS.

    3. Use AWS Lambda to push the file.

    4. Bypass the log server so that it does not get overloaded.
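    The pattern in option 2 of question 4 is worth sketching: because an Amazon SQS message
    tops out at 256 KB, the large payload goes to Amazon S3 and the queue carries only a
    small pointer message (the Amazon SQS Extended Client Library for Java automates exactly
    this). A sketch with hypothetical bucket and key names:

    ```python
    import json

    SQS_MAX_BYTES = 262_144  # 256 KB Amazon SQS message size limit

    # The 1 GB file lives in Amazon S3; the queue message is just a pointer.
    pointer_message = json.dumps({
        "s3Bucket": "example-payload-bucket",   # hypothetical bucket
        "s3Key": "uploads/large-file.bin",      # hypothetical key
        "sizeBytes": 1_073_741_824,             # size of the original file
    })

    print(len(pointer_message.encode("utf-8")) < SQS_MAX_BYTES)  # True
    ```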


  5. You want to design an application that sends a status email every morning to the system
    administrators. Which option will work?

    1. Create an Amazon SQS queue. Subscribe all the administrators to this queue. Set up
      an Amazon CloudWatch event to send a message on a daily cron schedule into the
      Amazon SQS queue.

    2. Create an Amazon SNS topic. Subscribe all the administrators to this topic. Set up an
      Amazon CloudWatch event to send a message on a daily cron schedule to this topic.



    3. Create an Amazon SNS topic. Subscribe all the administrators to this topic. Set up
      an Amazon CloudWatch event to send a message on a daily cron schedule to an AWS
      Lambda function that generates a summary and publishes it to this topic.

    4. Create an AWS Lambda function that sends out an email to the administrators every
      day directly with SMTP.

  6. What is the maximum size of an Amazon Simple Notification Service (Amazon SNS) message?

    1. 256 KB

    2. 128 KB

    3. 1 MB

    4. 5 MB


  7. You have an Amazon Kinesis data stream with one shard and one producer. How many
    consumer applications can consume from the stream?

    1. One consumer

    2. Two consumers

    3. Limitless number of consumers

    4. A limitless number of consumers, as long as together they read less than 2 MB per
      second and make no more than five read transactions per second

  8. A company has a website that sells books. It wants to find out which book is selling the
    most in real time. Every time a book is purchased, it produces an event. What service can
    you use to provide real-time analytics on the sales with a latency of 30 seconds?

    1. Amazon Simple Queue Service (Amazon SQS)

    2. Amazon Simple Notification Service (Amazon SNS)

    3. Amazon Kinesis Data Streams

    4. Amazon Kinesis Data Firehose


  9. A company sells books in the 50 states of the United States. It publishes each sale into an
    Amazon Kinesis data stream with two shards. For the partition key, it uses the two-letter
    abbreviation of the state, such as WA for Washington, WY for Wyoming, and so on. Which
    of the following statements is true?

    1. The records for Washington are all on the same shard.

    2. The records for both Washington and Wyoming are on the same shard.

    3. The records for Washington are on a different shard than the records for Wyoming.

    4. The records for Washington are evenly distributed between the two shards.
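    Question 9 turns on how Amazon Kinesis routes records: the service takes the MD5 hash
    of the partition key as a 128-bit integer and maps it into the hash key range owned by
    one shard, so every record with the same key always lands on the same shard. A small
    simulation of that routing (the even two-way range split mirrors a default two-shard
    stream):

    ```python
    import hashlib

    def shard_for(partition_key: str, shard_count: int = 2) -> int:
        """Mimic Kinesis routing: MD5 of the partition key, interpreted as a
        128-bit integer, mapped onto evenly split shard hash key ranges."""
        h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
        return min(h // ((2 ** 128) // shard_count), shard_count - 1)

    # All "WA" records hash to one shard; "WY" may or may not share it.
    print(shard_for("WA") == shard_for("WA"))  # True
    ```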



  10. What are the options for Amazon Kinesis Data Streams producers?

    1. Amazon Kinesis Agent

    2. Amazon Kinesis Data Streams API

    3. Amazon Kinesis Producer Library (KPL)

    4. Open-Source Tools

    5. All of these are valid options.

618 Chapter 12 Serverless Compute


Review Questions

  1. A company currently uses a serverless web application stack, which consists of Amazon API
    Gateway, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and AWS
    Lambda. They would like to make improvements to their AWS Lambda functions but do
    not want to impact their production functions.

    How can they accomplish this?

    1. Create new AWS Lambda functions with a different name, and update resources to
      point to the new functions when they are ready to test.

    2. Copy their AWS Lambda function to a new region where they can update their
      resources to the new region when ready.

    3. Create a new AWS account, and re-create all their serverless infrastructure for their
      application testing.

    4. Publish the current version of their AWS Lambda function, and create an alias named
      PROD. Assign PROD to the current version number, update resources with the PROD alias
      ARN, and then create a new version of the updated AWS Lambda function and assign it
      an alias of $DEV.

  2. What is the maximum amount of memory that you can assign to an AWS Lambda function?

    1. AWS runs the AWS Lambda function; it is a managed service, so you do not need to
      configure memory settings.

    2. 3008 MB

    3. 1000 MB

    4. 9008 MB


  3. What is the default timeout value for an AWS Lambda function?

    1. 3 seconds

    2. 10 seconds

    3. 15 seconds

    4. 25 seconds


  4. A company uses a third-party service to send checks to its employees for payroll. The
    company is required to send the third-party service a JSON file with the person’s name
    and the check amount. The company’s internal payroll application supports exporting
    only to CSVs, and it currently has cron jobs set up on its internal network to process
    these files. The server that is processing the data is aging, and the company is
    concerned that it might fail in the future. It is also looking to have AWS services
    perform the payroll function.

    What would be the best serverless option to accomplish this goal?

    1. Create an Amazon Elastic Compute Cloud (Amazon EC2) instance and the necessary
      cron job to process the file from CSV to JSON.

    2. Use AWS Import/Export to create a virtual machine (VM) image of the on-premises
      server and upload the Amazon Machine Images (AMI) to AWS.



    3. Use AWS Lambda to process the file with Amazon Simple Storage Service
      (Amazon S3).

    4. There is no way to process this file with AWS.


  5. What is the maximum execution time allowed for an AWS Lambda function?

    1. 60 seconds

    2. 120 seconds

    3. 230 seconds

    4. 300 seconds


  6. Which language is not supported for AWS Lambda functions?

    1. Ruby

    2. Python 3.6

    3. Node.js

    4. C# (.NET Core)


  7. How can you increase the limit of AWS Lambda concurrent executions?

    1. Use the Support Center page in the AWS Management Console to open a case and send
      a Service Limit Increase request.

    2. AWS Lambda does not have any limits for concurrent executions.

    3. Send an email to limits@amazon.com with the subject “AWS Lambda Increase.”

    4. You cannot increase concurrent executions for AWS Lambda.


  8. A company receives a permission-denied error after its AWS Lambda function is invoked
    and executes, even though the function has a valid trust policy. After investigating,
    the company realizes that its AWS Lambda function does not have access to download
    objects from Amazon Simple Storage Service (Amazon S3).

    Which type of policy do you need to correct to give access to the AWS Lambda function?

    1. Function policy

    2. Trust policy

    3. Execution policy

    4. None of the above
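    For question 8, the missing permission belongs on the permissions policy attached to
    the function’s execution role, not on the trust policy. A minimal IAM policy sketch
    granting the download permission (the bucket name is illustrative):

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-bucket/*"
        }
      ]
    }
    ```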


  9. A company wants to be able to send event payloads to an Amazon Simple Queue Service
    (Amazon SQS) queue if the AWS Lambda function fails.

    Which of the following configuration options does the company need to be able to do this
    in AWS Lambda?

    1. Enable a dead-letter queue.

    2. Define an Amazon Virtual Private Cloud (Amazon VPC) network.

    3. Enable concurrency.

    4. AWS Lambda does not support such a feature.



  10. A company wants to be able to pass configuration settings as variables to their AWS
    Lambda function at execution time.

    Which feature should the company use?

    1. Dead-letter queues

    2. AWS Lambda does not support such a feature.

    3. Environment variables

    4. None of the above
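    Environment variables (question 10) reach the function through the runtime’s normal
    mechanism; in Python that is os.environ. A sketch that can be run locally by seeding
    the variable the way the Lambda console or CLI would (the TABLE_NAME name is
    hypothetical):

    ```python
    import os

    # In a real deployment this value is set in the function configuration;
    # here we seed it so the handler can be exercised locally.
    os.environ["TABLE_NAME"] = "example-table"

    def handler(event, context):
        # Configuration is read at execution time, not hard-coded.
        return {"table": os.environ["TABLE_NAME"]}

    print(handler({}, None))  # {'table': 'example-table'}
    ```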

660 Chapter 13 Serverless Applications


Review Questions

  1. Which templating engine can you use to deploy infrastructure inside of AWS that is built
    for serverless technologies?

    1. AWS CloudFormation

    2. Ansible

    3. AWS OpsWorks for Automate Operations

    4. AWS Serverless Application Model (AWS SAM)


  2. What option do you need to enable to call Amazon API Gateway from another server or
    service?

    1. You do not need to enable any options. Amazon API Gateway is ready to use as soon
      as it’s deployed.

    2. Enable cross-origin resource sharing (CORS).

    3. Deploy a stage.

    4. Deploy a resource.


  3. A company is considering moving to the AWS serverless stack. What are two benefits of
    serverless stacks? (Select TWO.)

    1. No server management

    2. It costs less than Amazon Elastic Compute Cloud (Amazon EC2).

    3. Flexible scaling

    4. There are no benefits to serverless stacks.


  4. Can you create HTTP endpoints with Amazon API Gateway?

    1. Yes. You can create HTTP endpoints with Amazon API Gateway.

    2. No. API Gateway creates FTP endpoints.

    3. No. API Gateway only supports SSH endpoints.

    4. No. API Gateway is a secure service that only supports HTTPS.


  5. A company is moving to a serverless application, using Amazon Simple Storage Service
    (Amazon S3), AWS Lambda, and Amazon DynamoDB. They are currently using Amazon
    CloudFront as their content delivery network (CDN). They are concerned that they can
    no longer use Amazon CloudFront because they will have no Amazon Elastic Compute
    Cloud (Amazon EC2) instances running. Is their concern valid?

    1. Their concerns are valid: Amazon CloudFront only supports Amazon EC2.

    2. Their concerns are valid because all serverless applications are fully dynamic and
      contain no static information; thus, Amazon CloudFront does not support serverless
      applications.

    3. Their concerns are not valid. Amazon CloudFront supports serverless applications.

    4. Their concerns are valid. Amazon CloudFront does support serverless applications;
      however, it does not support Amazon S3.



  6. Amazon Cognito Mobile SDK does not support which language/platform?

    1. iOS

    2. Android

    3. JavaScript

    4. All of these languages/platforms are supported.


  7. Does Amazon Cognito support Short Message Service (SMS)–based multi-factor
    authentication (MFA)?

    1. No. Amazon Cognito does not support SMS-based MFA.

    2. No. Amazon Cognito does not support SMS-based MFA; however, it does support
      MFA.

    3. Yes. Amazon Cognito does support SMS-based MFA.

    4. None of the above.


  8. Does Amazon Cognito support device tracking and remembering?

    1. Amazon Cognito does not support device tracking and remembering.

    2. Amazon Cognito supports device tracking but not remembering.

    3. Amazon Cognito supports device remembering but not tracking.

    4. Amazon Cognito supports device remembering and tracking.


  9. What is the property name that you use to connect an AWS Lambda function to the Amazon
    API Gateway inside of an AWS Serverless Application Model (AWS SAM) template?

    1. events

    2. handler

    3. context

    4. runtime
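    In an AWS SAM template, the Events property (question 9) declares the API Gateway
    trigger directly on the function. A minimal sketch (resource names, path, and handler
    are illustrative):

    ```yaml
    Resources:
      HelloFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: app.handler
          Runtime: python3.6
          Events:                 # connects the function to API Gateway
            GetHello:
              Type: Api
              Properties:
                Path: /hello
                Method: get
    ```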

  10. A company wants to use a serverless application to run its dynamic website that is currently
    running on Amazon Elastic Compute Cloud (Amazon EC2) and Elastic Load Balancing
    (ELB). Currently, the application uses HTML, CSS, and React, and the database is a
    NoSQL flavor. You are the advisor—is this possible?

    1. No. This is not possible, because there is no way to run React in AWS. React is a
      Facebook technology.

    2. No. This is not possible, because you need an Amazon EC2 to run the web server.

    3. No. This is not possible, because there is no way to load balance a serverless
      application.

    4. Yes. This is possible; however, some refactoring will be required.



Review Questions

  1. Which of the following is the maximum Amazon DynamoDB item size limit?

    A. 512 KB

    B. 400 KB

    C. 4 KB

    D. 1,024 KB


  2. Which of the following is true when using Amazon Simple Storage Service (Amazon S3)?

    1. Versioning is enabled on a bucket by default.

    2. The largest size of an object in an Amazon S3 bucket is 5 GB.

    3. Bucket names must be globally unique.

    4. Bucket names can be changed after they are created.


  3. Which of the following is not a deciding factor when choosing an AWS Region for your
    bucket?

    1. Latency

    2. Storage class

    3. Cost

    4. Regulatory requirements


  4. Which of the following features can you use to protect your data at rest within Amazon
    DynamoDB?

    1. Fine-grained access controls

    2. Transport Layer Security (TLS) connections

    3. Server-side encryption provided by the DynamoDB service

    4. Client-side encryption


  5. You store your company’s critical data in Amazon Simple Storage Service (Amazon S3).
    The data must be protected against accidental deletions or overwrites. How can this be
    achieved?

    1. Use a lifecycle policy to move the data to Amazon S3 Glacier.

    2. Enable MFA Delete on the bucket.

    3. Use a path-style URL.

    4. Enable versioning on the bucket.


  6. How does Amazon Simple Storage Service (Amazon S3) object storage differ from block
    and file storage? (Select TWO.)

    1. Amazon S3 stores data in fixed blocks.

    2. Objects can be any size.

    3. Objects are stored in buckets.

    4. Objects contain both data and metadata.

      794 Chapter 14 Stateless Application Patterns


  7. What is the lifetime of data in an Amazon DynamoDB stream?

    1. 14 days

    2. 12 hours

    3. 24 hours

    4. 4 days


  8. How many times does each stream record in Amazon DynamoDB Streams appear in the
    stream?

    1. Twice

    2. Once

    3. Three times

    4. This value can be configured.


  9. Versioning is a means of keeping multiple variants of an object in the same bucket. You
    can use versioning to preserve, retrieve, and restore every version of every object stored in
    your Amazon S3 bucket. With versioning, you can easily recover from both unintended
    user actions and application failures. Which of the following is not a versioning state
    of a bucket?

    1. Versioning paused

    2. Versioning disabled

    3. Versioning suspended

    4. Versioning enabled


  10. Your team has built an application as a document management system that maintains
    metadata on millions of documents in a DynamoDB table. When a document is retrieved, you
    want to display the metadata beside the document. Which DynamoDB operation can you
    use to retrieve metadata attributes from a table?

    1. QueryTable

    2. UpdateTable

    3. Search

    4. Scan

  11. Which of the following objects are good candidates to store in a cache? (Select THREE.)

    1. Session state

    2. Shopping cart

    3. Product catalog

    4. Bank account balance


  12. Which of the following cache engines does Amazon ElastiCache support? (Select TWO.)

    1. Redis

    2. MySQL

    3. Couchbase

    4. Memcached



  13. How many nodes can you add to an Amazon ElastiCache cluster that is running Redis?

    A. 100

    B. 5

    C. 20

    D. 1


  14. What feature does Amazon ElastiCache provide?

    1. A highly available and fast indexing service for querying

    2. An Amazon Elastic Compute Cloud (Amazon EC2) instance with a large amount of
      memory and CPU

    3. A managed in-memory caching service

    4. An Amazon EC2 instance with Redis and Memcached already installed


  15. When designing a highly available web solution using stateless web servers, which services
    are suitable for storing session-state data? (Select THREE.)

    1. Amazon CloudFront

    2. Amazon DynamoDB

    3. Amazon CloudWatch

    4. Amazon Elastic File System (Amazon EFS)

    5. Amazon ElastiCache

    6. Amazon Simple Queue Service (Amazon SQS)


  16. Which AWS database service is best suited for nonrelational databases?

    1. Amazon Simple Storage Service Glacier (Amazon S3 Glacier)

    2. Amazon Relational Database Service (Amazon RDS)

    3. Amazon DynamoDB

    4. Amazon Redshift


  17. Which of the following statements about Amazon DynamoDB tables is true?

    1. Only one local secondary index is allowed per table.

    2. You can create global secondary indexes only when you are creating the table.

    3. You can have only one global secondary index.

    4. You can create local secondary indexes only when you are creating the table.



Review Questions

  1. You are required to set up dynamic scaling using Amazon CloudWatch alarms.

    Which of the following metrics could you monitor to trigger Auto Scaling events to scale
    out and scale in your instances?

    1. High CPU utilization to trigger scale-in action, and low CPU utilization to trigger
      scale-out action

    2. High CPU utilization to trigger scale-out action, and low CPU utilization to trigger
      scale-in action

    3. High latency to trigger a scale-in action, and low latency to trigger a scale-out action

    4. None of the above


  2. What is the length of time that metrics are stored for a data point with a period of
    300 seconds (5 minutes) in Amazon CloudWatch?

    1. The data point is stored for 3 hours.

    2. The data point is stored for 15 days.

    3. The data point is stored for 30 days.

    4. The data point is stored for 63 days.

    5. The data point is stored for 455 days (15 months).


  3. Which of the following does an AWS CloudTrail event not provide?

    1. Who made the request

    2. When the request was made

    3. What request is being made

    4. Why the request was made

    5. Which resource was acted on


  4. You must set up centralized logging for an application and create a cost-effective way to
    archive logs for compliance purposes.

    Which is the best solution?

    1. Install the Amazon CloudWatch agent on your servers to ingest the logs and store them
      indefinitely.

    2. Configure Amazon CloudWatch to ingest logs from your application servers.

    3. Install the Amazon CloudWatch agent on your servers to ingest the logs and set a new
      retention period for logs with regular exports to Amazon S3 for archival.

    4. None of the above.

      830 Chapter 15 Monitoring and Troubleshooting


  5. Which of the following options allow logs and metrics to be ingested into Amazon
    CloudWatch? (Select THREE.)

    1. Install the Amazon CloudWatch agent and configure it to ingest logs.

    2. Execute API operations to push metrics to Amazon CloudWatch.

    3. Configure Amazon CloudWatch to pull logs from servers.

    4. Use the AWS CLI to push metrics to Amazon CloudWatch.


  6. The following are Apache HTTP access logs.

    Which filter pattern would select events matching 404 errors?

    127.0.0.1 - - [24/Sep/2013:11:49:52 -0700] "GET /index.html HTTP/1.1" 404 287

    127.0.0.1 - - [24/Sep/2013:11:49:52 -0700] "GET /index.html HTTP/1.1" 404 287

    127.0.0.1 - - [24/Sep/2013:11:50:51 -0700] "GET /~test/ HTTP/1.1" 200 3

    127.0.0.1 - - [24/Sep/2013:11:50:51 -0700] "GET /favicon.ico HTTP/1.1" 404 308

    127.0.0.1 - - [24/Sep/2013:11:50:51 -0700] "GET /favicon.ico HTTP/1.1" 404 308

    127.0.0.1 - - [24/Sep/2013:11:51:34 -0700] "GET /~test/index.html HTTP/1.1" 200 3

    A. 4xx

    B. 400

    C. 404

    D. None of the above
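    In a CloudWatch Logs metric filter, space-delimited logs like these can be matched
    field by field with a pattern such as [ip, id, user, timestamp, request,
    status_code=404, size]. The selection logic can be simulated directly, since the
    status code is the second-to-last field of each line:

    ```python
    # Simulate selecting 404 events from space-delimited Apache access logs,
    # the way a field-based CloudWatch Logs filter pattern would.
    logs = [
        '127.0.0.1 - - [24/Sep/2013:11:49:52 -0700] "GET /index.html HTTP/1.1" 404 287',
        '127.0.0.1 - - [24/Sep/2013:11:50:51 -0700] "GET /~test/ HTTP/1.1" 200 3',
        '127.0.0.1 - - [24/Sep/2013:11:50:51 -0700] "GET /favicon.ico HTTP/1.1" 404 308',
    ]

    # The status code is the second-to-last space-delimited field.
    matches = [line for line in logs if line.split()[-2] == "404"]
    print(len(matches))  # 2
    ```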


  7. You build an application and enable AWS X-Ray tracing. You analyze the service graph and
    determine that the application requests to Amazon DynamoDB are not performing well and
    a majority of the issues are purple.

    What kind of problem is your application experiencing?

    1. Throttling

    2. Error

    3. Faults

    4. OK


  8. Which AWS service enables you to monitor resources and gather statistics, such as CPU
    utilization, from a single “pane of glass” interface?

    1. AWS CloudTrail logs

    2. Amazon CloudWatch alarms

    3. Amazon CloudWatch dashboards

    4. Amazon CloudWatch Logs


  9. By default, what is the number of days of AWS account activity that you can view, search,
    and download from the AWS CloudTrail event history?

    1. 30 days

    2. 60 days

    3. 75 days

    4. 90 days



  10. Which of the following is not able to access AWS CloudTrail data?

    1. AWS CLI

    2. AWS Management Console

    3. AWS CloudTrail API

    4. None of the above


  11. In AWS CloudTrail, which of the following are management events? (Select TWO.)

    1. Adding a row to an Amazon DynamoDB table

    2. Modifying an Amazon S3 bucket policy

    3. Uploading an object to an Amazon S3 bucket

    4. Creating an Amazon Relational Database Service (Amazon RDS) database instance

    5. Sending a notification to Amazon Simple Notification Service (Amazon SNS)


  12. Suppose that you have a custom web application running on an Amazon Elastic Compute
    Cloud (Amazon EC2) instance.

    What steps are needed to configure this instance to send custom application logs to Amazon
    CloudWatch Logs? (Select THREE.)

    1. Install the Amazon CloudWatch Logs agent.

    2. Attach an Elastic IP address to your Amazon EC2 instance.

    3. Configure the agent to send specific logs.

    4. Start the agent.

    5. Install the AWS Systems Manager agent.


  13. Which of the following are not supported Amazon CloudWatch alarm actions?

    1. AWS Lambda functions

    2. Amazon Simple Notification Service (Amazon SNS) topics

    3. Amazon Elastic Compute Cloud (Amazon EC2) actions

    4. EC2 Auto Scaling actions


  14. Which of the following Amazon Elastic Compute Cloud (Amazon EC2) metrics is not
    directly available through Amazon CloudWatch metrics?

    1. CPU utilization

    2. Network traffic in/out

    3. Disk I/O

    4. Memory (RAM) utilization


  15. Which of the following is the correct Amazon CloudWatch metric namespace for Amazon
    Elastic Compute Cloud (Amazon EC2) instances?

    1. AWS/EC2

    2. Amazon/EC2

    3. AWS/EC2Instance

    4. Amazon/EC2Instance



Review Questions

  1. You are developing an application that will run across dozens of instances. It uses
    some components from a legacy application that requires some configuration files to
    be copied from a central location and held on a volume local to each of the instances.
    You plan to modify your application with a new component in the future that will hold
    this configuration in Amazon DynamoDB. Which storage option should you use in the
    interim to provide the lowest cost and the lowest latency for your application to access
    the configuration files?

    1. Amazon S3

    2. Amazon EBS

    3. Amazon EFS

    4. Amazon EC2 instance store


  2. Similar to SQL, Amazon DynamoDB provides several operations for reading the data.
    Which operation is the most efficient way to retrieve a single item?

    1. Query

    2. Scan

    3. GetItem

    4. Join
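    The efficiency difference behind question 2 can be sketched with an in-memory stand-in
    for a table: GetItem is a single targeted read by primary key, while Scan must touch
    every item and filter afterward.

    ```python
    # In-memory stand-in for a DynamoDB table keyed on "pk".
    table = {f"item-{i}": {"pk": f"item-{i}", "value": i} for i in range(1000)}

    def get_item(pk):
        """GetItem analogue: one targeted read by primary key."""
        return table.get(pk)

    def scan(predicate):
        """Scan analogue: reads every item, then filters."""
        return [item for item in table.values() if predicate(item)]

    print(get_item("item-42")["value"])                 # 42
    print(len(scan(lambda item: item["value"] == 42)))  # 1
    ```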

  3. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across
    five categories: cost optimization, security, fault tolerance, performance, and service
    limits. Which of the following checks is NOT under the cost optimization or performance
    categories?

    1. Amazon EBS Provisioned IOPS (SSD) volume attachment configuration

    2. Amazon CloudFront header forwarding and cache hit ratio

    3. Amazon EC2 Availability Zone balance

    4. Unassociated Elastic IP address


  4. Which of the following common partition schemas includes a partition key design that
    distributes I/O requests evenly across partitions and uses provisioned I/O capacity of an
    Amazon DynamoDB table efficiently?

    1. Status code, where there are only a few possible status codes

    2. User ID, where the application has many users

    3. Item creation date, rounded to the nearest time period

    4. Device ID, where even if there are many devices tracked, one is by far more popular
      than all the others
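Whether a candidate partition key spreads load evenly can be checked empirically by hashing sample keys into buckets. A toy sketch (the bucket count, key formats, and hash function here are illustrative; DynamoDB's internal hashing is not public):

```python
import hashlib
from collections import Counter

def partition_for(key: str, num_partitions: int = 8) -> int:
    """Map a partition key to a bucket with a stable hash (illustrative only;
    DynamoDB's internal hash function is not public)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# High-cardinality keys (e.g., user IDs) spread across all buckets...
users = Counter(partition_for(f"user-{i}") for i in range(10_000))

# ...while a low-cardinality key (a handful of status codes) cannot.
statuses = Counter(partition_for(s) for s in ["OK", "ERROR", "PENDING"])

print(len(users), len(statuses))  # users hit every bucket; statuses hit at most 3
```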

      882 Chapter 16 Optimization


  5. You are developing an application that consists of a set of Amazon EC2 instances
    hosting a web layer and a database layer hosting a MySQL instance. You must add a
    layer that ensures the most frequently accessed data from the database is fetched in a
    faster and more efficient manner. Which of the following can be used to store the
    frequently accessed data?

    1. Amazon Simple Queue Service (Amazon SQS) queue

    2. Amazon Simple Notification Service (Amazon SNS) topic

    3. Amazon CloudFront distribution

    4. Amazon ElastiCache instance


  6. You have an application deployed to the AWS platform. The application makes requests
    to an Amazon Simple Storage Service (Amazon S3) bucket. After monitoring the Amazon
    CloudWatch metrics, you notice that the number of GET requests has suddenly spiked.
    Which of the following can be used to optimize Amazon S3 cost and performance?

    1. Add Amazon ElastiCache in front of the S3 bucket.

    2. Use Amazon DynamoDB instead of Amazon S3.

    3. Place an Amazon CloudFront distribution in front of the S3 bucket.

    4. Place an Elastic Load Balancing load balancer in front of the S3 bucket.


  7. You are writing an application that will store data in an Amazon DynamoDB table. The
    ratio of read operations to write operations will be 1,000 to 1, with the same data being
    accessed frequently. Which feature or service should you enable on the DynamoDB table to
    optimize performance and minimize costs?

    1. Amazon DynamoDB Auto Scaling

    2. Amazon DynamoDB cross-region replication

    3. Amazon DynamoDB Streams

    4. Amazon DynamoDB Accelerator


  8. A developer is migrating an on-premises web application to the AWS Cloud. The
    application currently runs on a 32-processor server and stores session state in memory. On
    Mondays, the server runs at 80 percent CPU utilization, but at only about 5 percent CPU
    utilization at other times. How should the developer change the code to optimize running
    in the AWS Cloud?

    1. Store session state on the Amazon EC2 instance store.

    2. Encrypt the session state in memory.

    3. Store session state in an Amazon ElastiCache cluster.

    4. Compress the session state in memory.


  9. A company is using an ElastiCache cluster in front of their Amazon RDS instance. The
    company would like you to implement logic into the code so that the cluster retrieves data
    from Amazon RDS only when there is a cache miss. Which strategy can you implement to
    achieve this?

    1. Error retries

    2. Lazy loading

    3. Exponential backoff

    4. Write-through
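The lazy loading (cache-aside) strategy listed among the options can be sketched with an in-memory dict standing in for the ElastiCache cluster and a stub function standing in for the Amazon RDS query (both hypothetical):

```python
cache = {}          # stands in for the ElastiCache cluster
db_reads = 0        # counts trips to the database

def query_database(key):
    """Stand-in for a (slow) Amazon RDS query."""
    global db_reads
    db_reads += 1
    return f"row-for-{key}"

def get(key):
    """Lazy loading: read from cache; on a miss, fetch from the DB and cache it."""
    if key in cache:                 # cache hit: no database trip
        return cache[key]
    value = query_database(key)      # cache miss: go to the database
    cache[key] = value               # populate the cache for next time
    return value

get("user#42")   # miss -> hits the database
get("user#42")   # hit  -> served from cache
print(db_reads)  # 1
```

Only missed keys ever reach the database, which is exactly the behavior the question asks for.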


  10. Your application will be hosted on an Amazon EC2 instance, which will be part of an AWS
    Auto Scaling group. The application must fetch the private IP of the instance. Which of the
    following can achieve this?

    1. Query the instance metadata.

    2. Query the instance user data.

    3. Have the application run ifconfig.

    4. Have an administrator get the IP address from the Amazon EC2 console.


  11. You just developed code in AWS Lambda that uses recursive functions. You see some
    throttling errors in the metrics. Which of the following should you do to resolve the issue?

    1. Use API Gateway to call the recursive code.

    2. Use versioning for the recursive function.

    3. Place the recursive function in a separate package.

    4. Avoid using recursive code in your function.


  12. A production application is making calls to an Amazon Relational Database Service
    (Amazon RDS) instance. The application’s reporting module is experiencing heavy traffic,
    causing performance issues. How can the application be optimized to alleviate this issue?

    1. Move the database to Amazon DynamoDB, and point the reporting module to the new
      DynamoDB table.

    2. Enable Multi-AZ for the database, and point the reporting module to the secondary
      database.

    3. Enable read replicas for the database, and point the reporting module to the read
      replica.

    4. Place an Elastic Load Balancing load balancer in front of the reporting part of the
      application.

  13. Your application uses Amazon S3 buckets. You have users in other countries accessing
    objects in those buckets. What can you do to reduce latency for those users outside of your
    country?

    1. Host a static website.

    2. Change the storage class.

    3. Enable cross-region replication.

    4. Enable encryption.


  14. You have an application that uploads objects between 200 MB and 500 MB in size to
    Amazon S3. The process takes longer than expected, and you want to improve the
    performance of the application. Which of the following would you consider?

    1. Enable versioning on the bucket.

    2. Use the multipart upload API.

    3. Write the items in batches for better performance.

    4. Create multiple threads to upload the objects.
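The multipart upload option splits a large object into independently uploadable, retryable parts. A sketch of the part-boundary arithmetic (the 100 MB part size is an illustrative choice; S3 requires every part except the last to be at least 5 MB):

```python
MB = 1024 * 1024

def part_ranges(object_size: int, part_size: int = 100 * MB):
    """Return (part_number, start, end) byte ranges for a multipart upload."""
    assert part_size >= 5 * MB, "S3 parts (except the last) must be >= 5 MB"
    ranges = []
    start = 0
    part_number = 1
    while start < object_size:
        end = min(start + part_size, object_size)   # last part may be short
        ranges.append((part_number, start, end))
        start = end
        part_number += 1
    return ranges

# A 450 MB object at 100 MB per part -> five parts, the last one 50 MB.
parts = part_ranges(450 * MB)
print(len(parts), parts[-1][2] - parts[-1][1])  # 5 52428800
```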


  15. You must bootstrap your application script onto instances that are launched inside an
    AWS Auto Scaling group. Which is the optimal way to achieve this?

    1. Create a Lambda function to install the script.

    2. Place a scheduled task on the instance that starts on boot.

    3. Place the script in the instance user data.

    4. Place the script in the instance metadata.


AWS® Certified Developer Official Study Guide

By Nick Alteen, Jennifer Fisher, Casey Gerena, Wes Gruver, Asim Jalis,
Heiwad Osman, Marife Pagan, Santosh Patlolla and Michael Roth

Copyright © 2019 by Amazon Web Services, Inc.


Appendix: Answers to Review Questions



Chapter 1: Introduction to AWS Cloud API

  1. B. The specific credentials include the access key ID and secret access key. If the access key
    is valid only for a short-term session, the credentials also include a session token.

    AWS uses user names and passwords for working with the AWS Management Console,
    not for working with the APIs. Customer master keys are used for data encryption, not
    for API access.

  2. C. Most AWS API services are regional in scope. The service is running and replicating
    your data across multiple Availability Zones within an AWS Region. You choose a regional
    API endpoint either from your default configuration or by explicitly setting a location for
    your API client.

  3. A. The AWS SDK relies on access keys, not passwords. The best practice is to use AWS
    Identity and Access Management (IAM) credentials and not the AWS account credentials.
    Between IAM users and IAM roles, only IAM users can have long-term security
    credentials.

  4. C. Although you can generate IAM users for everyone, this introduces management
    overhead of a new set of long-term credentials. If you already have an external directory
    of your organization’s users, use IAM roles and identity federation to provide short-term,
    session-based access to AWS.


  5. A. The permissions for the DynamoDBFullAccess managed policy grant access to all
    Amazon DynamoDB tables in your account. Write a custom policy to scope the access to
    a specific table. You can update the permissions of a user independently from the lifecycle
    of the table. DynamoDB does not have its own concept of users, but it uses the AWS API
    and relies on IAM.

  6. B. You can view or manage your AWS resources with the console, AWS CLI, or AWS SDK.
    The core functionality of each SDK is powered by a common set of web services on the
    backend. Most AWS services are isolated by AWS Region.

  7. B. If you look closely at the URL, the AWS Region string is incorrectly set as us-east-1a,
    which is specific to an Availability Zone. An AWS Region string ends in a number, and the
    correct configuration is us-east-1. If the error were related to API credentials, you would
    receive a more specific error, such as AccessDenied.

  8. B. This policy allows access to the s3:ListBucket operation on example_bucket as a
    specific bucket. This does not grant access to operations on the objects within the bucket;
    IAM is granular. The date in the Version attribute is a specific version of the IAM policy
    language and not an expiration.
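The granularity described here can be illustrated with a toy evaluator (a deliberate simplification: real IAM evaluation also handles wildcards, conditions, explicit denies, and policy combination):

```python
# The policy from the question: ListBucket on the bucket, nothing on its objects.
policy = {
    "Version": "2012-10-17",   # policy-language version, not an expiration
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::example_bucket",
    }],
}

def is_allowed(action: str, resource: str) -> bool:
    """Default deny: access is granted only if some statement allows it."""
    for stmt in policy["Statement"]:
        if (stmt["Effect"] == "Allow"
                and stmt["Action"] == action
                and stmt["Resource"] == resource):
            return True
    return False

print(is_allowed("s3:ListBucket", "arn:aws:s3:::example_bucket"))     # True
print(is_allowed("s3:GetObject", "arn:aws:s3:::example_bucket/key"))  # False
```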


  9. D. The long-term credentials are not limited to a single AWS Region. IAM is a global
    service, and IAM user credentials are valid across different AWS Regions. However, when
    the API call is made, a signing key is derived from the long-term credentials, and that
    signing key is scoped to a region, service, and day.


  10. B. The AssumeRole method of the AWS Security Token Service (AWS STS) returns the
    security credentials for the role that include the access key ID, secret access key, and session
    token. AWS Key Management Service (AWS KMS) is not used for API signing. The identity
    provider may provide a SAML assertion, but AWS STS generates the AWS API credentials.


  11. D. The DynamoDBReadOnlyAccess policy is a built-in policy that applies to the resource *
    wildcard, which means that it applies to any and all DynamoDB tables accessible from the
    account regardless of when those tables were created. Because IAM policies are related to
    the IAM user, not the access key, rotating the key does not affect the policy. IAM policies
    are also global in scope, so you do not need a custom one per AWS Region. You can add
    IAM users to IAM groups but not IAM roles. Instead, roles must be assumed for short-term
    sessions.


  12. B. The IAM trust policy defines the principals who can request role credentials from the
    AWS STS. Access policies define what API actions can be performed with the credentials
    from the role.

  13. C. You can define an IAM user for your new team member and add the IAM user to an
    IAM group to inherit the appropriate permissions. The best practice is not to use AWS
    account root user credentials. Though you can use AWS Directory Service to track users,
    this answer is incomplete, and AWS KMS is not related to permissions. Roles can be
    assumed only for short-term sessions—there are no long-term credentials directly
    associated with the role.

  14. C. The AWS API backend is accessed through web service calls and is operating system–
    and programming language–agnostic. You do not need to do anything special to enable
    specific programming languages other than downloading the appropriate SDK.

  15. B. The primary latency concern is for customers accessing the data, and there are no
    explicit dependencies on existing infrastructure in the United States. Physically locating
    the application resources closer to these users in Australia reduces the distance that the
    information must travel and therefore decreases the latency.


Chapter 2: Introduction to Compute and Networking

  1. B. You launch Amazon Elastic Compute Cloud (Amazon EC2) instances into specific
    subnets that are tied to specific Availability Zones. You can look up the Availability Zone
    in which you have launched an Amazon EC2 instance. While an Availability Zone is part
    of a region, this answer is not the most specific. You do not get to choose the specific
    data center, and edge locations do not support EC2.

  2. B. When you stop an Amazon EC2 instance, its public IP address is released. When you
    start it again, a new public IP address is assigned. If you require a public IP address to be
    persistently associated with the instance, allocate an Elastic IP address. SSH key pairs and
    security group rules do not have any built-in expiration, and SSH is enabled as a service by
    default. It is available even after restarts. Security groups do not expire.


  3. A. A restricted rule that allows RDP from only certain IP addresses may block your request
    if you have a new IP address because of your location. Because you are trying to connect to
    the instance, verify that an appropriate inbound rule is set as opposed to an outbound rule.
    For many variants of Windows, RDP is the default connection mechanism, and it defaults
    to enabled even after a reboot.

  4. A, D. The NAT gateway allows outbound requests to the external API to succeed while
    preventing inbound requests from the internet. Configuring the security group to allow
    only inbound requests from your web servers allows outbound requests to succeed because
    the default rule for the security group allows outbound requests to the APIs that your web
    service needs. Option B is incorrect because security group rules cannot explicitly deny
    traffic; they can only allow it. Option C is incorrect because network ACLs are stateless,
    and this rule would prevent all of the replies to your outbound web requests from entering
    the public subnet.

  5. C. You are in full control over the software on your instance. The default user that was
    created when the instance launched has full control over the guest operating system and
    can install the necessary software. Instance profiles are unrelated to the software on the
    instance.

  6. D. You can query the Amazon EC2 metadata service for this information. Networking
    within the Amazon Virtual Private Cloud (Amazon VPC) is based on private IP addresses,
    so this rules out options A and B. Because the metadata service is available, you are not
    required to use a third-party service, which eliminates option C.

  7. A. You can implement user data to execute scripts or directives that install additional
    packages. Even though you can use Amazon Simple Storage Service (Amazon S3) to stage
    software installations, there is no special bucket. You have full control of EC2 instances,
    including the software. AWS KMS is unrelated to software installation.

  8. A. Amazon EC2 instances are resizable. You can change the RAM available by changing
    the instance type. Option B is incorrect because you can change this attribute only when
    the instance is stopped. Although option C is one possible solution, it is not required.
    Option D is incorrect because the RAM available on the host server does not change the
    RAM allocation for your EC2 instance.

  9. A. AWS generates the default password for the instance and encrypts it by using the
    public key from the Amazon EC2 key pair used to launch the instance. You do not select
    a password when you launch an instance. You can decrypt this with the private key. IAM
    users and IAM roles are not for providing access to the operating system on the Amazon
    EC2 instance.

  10. A, B, E. For an instance to be directly accessible as a web server, you must assign a
    public IP address, place the instance in a public subnet, and ensure that the inbound
    security group rules allow HTTP/HTTPS. A public subnet is one in which there is a direct
    route to an internet gateway. Option C defines a private subnet. Because security groups
    are stateful, you are not required to set the outbound rules—the replies to the inbound
    request are automatically allowed.


  11. A, D. You can use an AMI as a template for launching any number of Amazon EC2
    instances. AMIs are available for various versions of Windows and Linux. Option B is false
    because AMIs are local to the region in which they were created unless they are explicitly
    copied. Option C is false because, in addition to AWS-provided AMIs, there are third-party
    AMIs in the marketplace, and you can create your own AMIs.

  12. B, D. Option B is true; Amazon Elastic Block Store (Amazon EBS) provides persistent
    storage for all types of EC2 instances. Option D is true because hardware accelerators,
    such as GPU and FPGA, are accessible depending on the type of instance. Option A is
    false because instance store is provided only for a few Amazon EC2 instance types.
    Option C is incorrect because Amazon EC2 instances can be resized after they are
    launched, provided that they are stopped during the resize.

  13. B, D. Only instances in the running state can be started, stopped, or rebooted.

  14. D. Both the web server and the database are running on the same instance, and they can
    communicate locally on the instance. Option A is incorrect because security groups apply
    to only network traffic that leaves the instance. Option C is incorrect because network
    ACLs apply only to traffic leaving a subnet. Similarly, option B is incorrect because the
    public IP address is required for inbound requests from the internet but is not necessary for
    requests local to the same instance.

  15. C. A public subnet is one in which there is a route that directs internet traffic (0.0.0.0/0) to
    an internet gateway. None of the other routes provides a direct route to the internet, which
    is required to be a public subnet.

  16. D. A private subnet that allows outbound internet access must provide an indirect route to
    the internet. This is provided by a route that directs internet traffic to a NAT gateway or
    NAT instance. Option C is incorrect because a route to an internet gateway would make
    this a public subnet with a direct connection to the internet. The remaining options do not
    provide access to the internet.
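The routing rules in answers 15 and 16 can be condensed into a small classifier. A sketch (the `igw-`/`nat-` target-ID prefixes follow AWS naming conventions; representing a route table as a single CIDR-to-target map is a simplification of real route tables, which also do longest-prefix matching):

```python
def classify_subnet(routes: dict) -> str:
    """Classify a subnet by where its default route (0.0.0.0/0) points.
    `routes` maps destination CIDR -> target ID (simplified model)."""
    target = routes.get("0.0.0.0/0", "")
    if target.startswith("igw-"):
        return "public"                  # direct route to an internet gateway
    if target.startswith("nat-"):
        return "private-with-outbound"   # outbound only, via a NAT gateway
    return "private"                     # no route to the internet

print(classify_subnet({"": "local", "0.0.0.0/0": "igw-0abc"}))
print(classify_subnet({"": "local", "0.0.0.0/0": "nat-0abc"}))
print(classify_subnet({"": "local"}))
```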

  17. D. Amazon VPC Flow Logs have metadata about each traffic flow within your Amazon
    VPC and show whether the connection was accepted or rejected. The other responses do
    not provide a log of network traffic.

  18. C. Amazon CloudWatch is the service that tracks metrics, including CPU utilization for an
    Amazon EC2 instance. The other services are not responsible for tracking metrics.

  19. B. EBS volumes provide persistent storage for an Amazon EC2 instance. The data is
    persisted until the volume is deleted and therefore persists on the volume when the
    instance is stopped.

  20. F. You can install any software you want on an Amazon EC2 instance, including any
    interpreters required to run your application code.

  21. B, C. Web requests are typically made on port 80 for HTTP and port 443 for HTTPS.
    Because security groups are stateful, you must set only the inbound rule. Options A and D
    are unnecessary because the security group automatically allows the outbound replies to the
    inbound requests.


  22. B, D. The customer is responsible for the guest operating system and above. Options C and
    E fall under AWS responsibility. AWS is responsible for the virtualization layer, underlying
    host machines, and all the way down to the physical security of the facilities.


Chapter 3: Hello, Storage

  1. D. Amazon EC2 instance store is directly attached to the instance, which will give you the
    lowest latency between the disk and your application. Instance store is also provided at no
    additional cost on instance types that have it available, so this is the lowest-cost option.
    Additionally, since the data is being retrieved from somewhere else, it can be copied back to
    an instance as needed.

    Option A is incorrect because Amazon S3 cannot be directly mounted to an Amazon EC2
    instance.

    Options B and C are incorrect because Amazon EBS and Amazon Elastic File System
    (Amazon EFS) would be a higher-cost option with higher latency than instance store.

  2. D, E. Objects are stored in buckets and contain both data and metadata.
    Option A is incorrect because Amazon S3 is object storage, not block storage.

    Option B is incorrect because objects are identified by a URL generated from the bucket
    name, service region endpoint, and key name.

    Option C is incorrect because Amazon S3 objects can range in size from a minimum of
    0 bytes to a maximum of 5 TB.

  3. B. The volume is created immediately, but the data is loaded lazily, meaning that the
    volume can be accessed upon creation, and if the data being requested has not yet been
    restored, it will be restored upon first request.

    Options A and C are incorrect because it does not matter what the size of the volume is or
    the amount of the data that is stored on the volume. Lazy loading will get data upon first
    request as needed while the volume is being restored.

    Option D is incorrect because an Amazon EBS-optimized instance provides additional,
    dedicated capacity for Amazon EBS I/O. This minimizes contention, but it does not
    increase or decrease the amount of time before the data is made available while
    restoring a volume.

  4. A, B, D. Option C is incorrect because Amazon S3 is accessible through a URL. Amazon
    EFS is an AWS service that can be mounted to the file system of multiple Amazon EC2
    instances. Amazon S3 can be accessible to multiple EC2 instances, but not through a file
    system mount.

    Option E is incorrect because, unlike Amazon EBS volumes, storage in a bucket does not
    need to be pre-allocated and can grow in a virtually unlimited manner.

  5. A, C. Amazon Simple Storage Service Glacier is optimized for long-term archival
    storage and is not suited to data that needs immediate access or short-lived data that is
    erased within 90 days.


  6. B. Option B is correct because pre-signed URLs allow you to grant time-limited permission
    to download objects from an Amazon S3 bucket.

    Option A is incorrect because static web hosting requires world-read access to all
    content.

    Option C is incorrect because AWS IAM policies do not know who the authenticated
    users of your web application are, as these are not IAM users.

    Option D is incorrect because logging can help track content loss, but not prevent it.


  7. A, D. Option A is correct because the data is automatically replicated within an
    Availability Zone.

    Option D is correct because Amazon EBS volumes persist when the instance is stopped.
    Option B is incorrect. There are no tapes in the AWS infrastructure.

    Option C is incorrect because Amazon EBS volumes can be encrypted upon creation and
    used by an instance in the same manner as if they were not encrypted.

  8. C. The Max I/O performance mode is optimized for applications where tens, hundreds,
    or thousands of EC2 instances are accessing the file system. It scales to higher levels of
    aggregate throughput and operations per second with a trade-off of slightly higher
    latencies for file operations.

    Option A is incorrect because the General-Purpose performance mode in Amazon EFS is
    appropriate for most file systems, and it is the mode selected by default when you create a
    file system. However, when you need concurrent access from 10 or more instances to the
    file system, you may need to increase your performance.

    Option B is incorrect. This is an option to increase I/O throughput for Amazon EBS
    volumes by connecting multiple volumes and setting up RAID 0 to increase overall I/O.

    Option D is incorrect. Changing to a larger instance size will increase your cost for
    compute, but it will not improve the performance for concurrently connecting to your
    Amazon EFS file system from multiple instances.

  9. A, B, D. Options A, B, and D are required, and optionally you can also set a friendly
    CNAME to the bucket URL.

    Option C is incorrect because Amazon S3 does not support FTP transfers.
    Option E is incorrect because HTTP does not need to be enabled.

  10. C. A short period of heavy traffic is exactly the use case for the bursting nature of general-
    purpose SSD volumes—the rest of the day is more than enough time to build up enough
    IOPS credits to handle the nightly task.

    Option A is incorrect because to set up a Provisioned IOPS SSD volume to handle the peak
    would mean overprovisioning and spending money for more IOPS than you need during
    off-peak time.

    Option B is incorrect because instance stores are not durable.

    Option D is incorrect because magnetic volumes cannot provide enough IOPS.
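The burst arithmetic behind this answer can be sketched using the documented gp2 model (a 5.4-million-credit bucket, a baseline of 3 IOPS per GiB with a 100 IOPS floor, and a 3,000 IOPS burst ceiling); the 100 GiB volume size below is an illustrative example:

```python
def gp2_burst_seconds(volume_gib: int, burst_iops: int = 3000,
                      bucket_credits: float = 5_400_000) -> float:
    """How long a full credit bucket sustains max burst on a gp2 volume.
    Credits drain at the burst rate while refilling at the baseline rate."""
    baseline = max(100, 3 * volume_gib)   # gp2 baseline: 3 IOPS/GiB, floor of 100
    if baseline >= burst_iops:            # large volumes never need to burst
        return float("inf")
    return bucket_credits / (burst_iops - baseline)

# A 100 GiB volume (300 IOPS baseline) can burst at 3,000 IOPS for 2,000 s,
# then spends the rest of the day re-accruing credits at 300 IOPS.
print(round(gp2_burst_seconds(100)))  # 2000
```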


  11. C, D, E. Option A is incorrect because you store data in Amazon S3 Glacier as an archive.
    You upload archives into vaults. Vaults are collections of archives that you use to organize
    your data. Amazon S3 stores data in objects that live in buckets.

    Option B is incorrect because archives are identified by system-created archive IDs, not
    key names as in Amazon S3.

  12. A. Amazon EFS supports one to thousands of Amazon EC2 instances connecting to a file
    system concurrently.

    Options B and C are incorrect because Amazon EBS and Amazon EC2 instance store can
    be mounted only to a single instance at a time.

    Option D is incorrect because Amazon S3 does not provide a file system connection, but
    rather connectivity over the web. It cannot be mounted to an instance directly.

  13. B. There is no delay in processing when commencing a snapshot.

    Options A and C are incorrect because the size of the volume or the amount of the data that
    is stored on the volume does not matter. The volume will be available immediately.

    Option D is incorrect because an Amazon EBS-optimized instance provides additional,
    dedicated capacity for Amazon EBS I/O. This minimizes contention, but it does not change
    the fact that the volume will still be available while taking a snapshot.

  14. B, C, E. Amazon S3 bucket policies can specify a request IP range, an AWS account, and a
    prefix for objects that can be accessed.

    Options A and D are incorrect because bucket policies cannot be restricted by company
    name or country of origin.
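The three restrictions map onto standard bucket-policy elements: a `Principal` for the AWS account, a `Resource` key prefix for objects, and an `aws:SourceIp` condition for the request IP range. A sketch (the account ID, bucket name, and CIDR are placeholders):

```python
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},   # a specific AWS account
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/reports/*",      # an object key prefix
        "Condition": {"IpAddress": {"aws:SourceIp": ""}},  # a request IP range
    }],
}

print(json.dumps(bucket_policy, indent=2))
```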

  15. B, D. Option B is incorrect because Amazon S3 cannot be mounted to an Amazon EC2
    instance like a file system.

    Option D is incorrect because Amazon S3 should not serve as primary database storage
    because it is object storage, not transactional block-based storage. Databases are generally
    stored on disk in one or more large files. If you needed to change one row in a database, the
    entire database file would need to be updated in Amazon S3, and every time you needed to
    access a record, you’d need to download the whole database.

  16. B, C, E. Option A is incorrect because static web hosting does not restrict data access. You
    can host a website on Amazon S3, but the bucket must have public read access, so everyone
    in the world will have read access to this bucket.

    Option B is correct because creating a presigned URL for an object optionally allows you to
    share objects with others.

    Option C is correct because Amazon S3 access control lists (ACLs) enable you to manage
    access to buckets and objects, defining which AWS accounts or groups are granted access
    and the type of access.

    Option D is incorrect because using an Amazon S3 lifecycle policy does not restrict data
    access. Lifecycle policies can be used to define actions for Amazon S3 to take during an
    object’s lifetime (for example, transition objects to another storage class, archive them, or
    delete them after a specified period of time).


    Option E is correct because a bucket policy is a resource-based AWS IAM policy that
    allows you to grant permission to your Amazon S3 resources for other AWS accounts or
    IAM users.

  17. C, E. Option A is incorrect because even though you get increased redundancy with using
    cross-region replication, that does not protect the object from being deleted.

    Option B is incorrect because vault locks are a feature of Amazon S3 Glacier, not a
    feature of Amazon S3.

    Option D is incorrect because a lifecycle policy would move the object to Amazon S3
    Glacier, moving it out of your intended storage in Amazon S3 and increasing the time to
    access the data, and it does not prevent the object from being deleted once it arrives in
    Amazon S3 Glacier.

    Options C and E are correct. Versioning protects data against inadvertent or intentional
    deletion by storing all versions of the object, and MFA Delete requires a one-time code
    from a multi-factor authentication (MFA) device to delete objects.

  18. C. To track requests for access to your bucket, enable access logging. Each access log
    record provides details about a single access request, such as the requester, bucket name,
    request time, request action, response status, and error code (if any). Access log information
    can be useful in security and access audits. It can also help you learn about your customer
    base and understand your Amazon S3 bill.

  19. A, B, D. Option A is correct because cross-region replication allows you to replicate
    data between distant AWS Regions to satisfy these requirements.

    Option B is correct because this can minimize latency in accessing objects by maintaining
    object copies in AWS Regions that are geographically closer to your users.

    Option D is correct because you can maintain object copies in both regions, allowing lower
    latency by bringing the data closer to the compute.

    Option C is incorrect because cross-region replication does not protect against accidental
    deletion.

    Option E is incorrect because Amazon S3 is designed for 11 nines of durability for objects
    in a single region. A second region does not significantly increase durability.

  20. C. If data must be encrypted before being sent to Amazon S3, client-side encryption must
    be used.

    Options A, B, and D are incorrect because they use server-side encryption. This will only
    encrypt the data at rest in Amazon S3, not prior to transit to Amazon S3.

  21. B. Data is automatically replicated across at least three Availability Zones within a single
    region.

    Option A is incorrect because you can optionally choose to replicate data to other
    regions, but that is not done by default.

    Option C is incorrect because versioning is optional, and data in Amazon S3 is durable
    regardless of turning on versioning.

    Option D is incorrect because there are no tapes in the AWS infrastructure.


Chapter 4: Hello, Databases

  1. B, D, E. Amazon Relational Database Service (Amazon RDS) manages the work involved
    in setting up a relational database, from provisioning the infrastructure capacity to
    installing the database software. After your database is up and running, Amazon RDS
    automates common administrative tasks, such as performing backups and patching the
    software that powers your database. Option A is incorrect. Because Amazon RDS provides
    native database access, you interact with the relational database software as you normally
    would. This means that you’re still responsible for managing the database settings that are
    specific to your application. Option C is incorrect. You need to build the relational schema
    that best fits your use case and are responsible for any performance tuning to optimize
    your database for your application’s workflow and query patterns.


  2. B. Amazon Neptune is a fast, reliable, fully managed graph database to store and manage
    highly connected datasets. Option A is incorrect because Amazon Aurora is a managed
    SQL database that is meant for transactional workloads that are ACID-compliant. Option
    C is incorrect because this is a managed NoSQL database service, which is meant for more
    key-value datasets with no relationships. Option D is incorrect because Amazon Redshift
    is a data warehouse that can be used for running analytical queries (OLAP) on data
    warehouses that are petabytes in scale.


  3. B. NoSQL databases, such as Amazon DynamoDB, excel at scaling to hundreds of thou-
    sands of requests with key-value access to user profile and session. Option A is incorrect
    because the session state is typically suited for small amounts of data, and DynamoDB can
    scale more effectively with this type of dataset. Option C is incorrect because Amazon Red-
    shift is a data warehouse service that is used for analytical queries on petabyte scale datas-
    ets, so it would not be a good solution. Option D is incorrect because DynamoDB provides
    scale, whereas MySQL on Amazon EC2 eventually becomes bottlenecked. Additionally,
    NoSQL databases are much faster and more scalable for this type of dataset.


  4. A. 1 RCU = one strongly consistent read per second of up to 4 KB.
    15 KB rounds up to four 4 KB chunks (15 / 4 = 3.75, rounded up to 4), so each read
    consumes 4 RCUs.

    So you need 25 × 4 = 100 RCUs.


  5. C. 1 RCU = two eventually consistent reads per second of up to 4 KB.
    15 KB rounds up to four 4 KB chunks (15 / 4 = 3.75, rounded up to 4).

    So you need (25 × 4) / 2 = 50 RCUs.


  6. D. 1 WCU = 1 write per second of 1 KB (1024 bytes).

    512 bytes uses one complete chunk of 1 KB (512/1024 = 0.5, rounded up to 1).
    So you need 100 × 1 = 100 WCUs.
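The capacity-unit arithmetic in questions 4 through 6 can be sketched as follows (the function names are illustrative, not part of any AWS API):

```python
import math

def rcus(reads_per_sec: int, item_kb: float, strongly_consistent: bool) -> int:
    """RCUs needed: 1 RCU = one strongly consistent (or two eventually
    consistent) read per second of up to 4 KB. Item size rounds up to 4 KB chunks."""
    chunks = math.ceil(item_kb / 4)
    units = reads_per_sec * chunks
    return units if strongly_consistent else math.ceil(units / 2)

def wcus(writes_per_sec: int, item_kb: float) -> int:
    """WCUs needed: 1 WCU = one write per second of up to 1 KB, rounded up."""
    return writes_per_sec * math.ceil(item_kb)

# Question 4: 25 strongly consistent reads/sec of 15 KB items
print(rcus(25, 15, strongly_consistent=True))    # 100
# Question 5: the same workload, eventually consistent
print(rcus(25, 15, strongly_consistent=False))   # 50
# Question 6: 100 writes/sec of 512-byte items
print(wcus(100, 512 / 1024))                     # 100
```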

  7. B. Amazon DynamoDB Accelerator (DAX) is a write-through caching service that quickly
    integrates with DynamoDB with a few quick code changes. DAX will seamlessly inter-
    cept the API call, and your caching solution will be up and running in a short amount of
    time. Option A is incorrect because you could implement your own solution; however, this
    would likely take a significant amount of development time. Option C is incorrect because
    your company would like to get the service up and running quickly. Implementing Redis
    on Amazon EC2 to meet your application’s needs would take additional time. Option D is
    incorrect for many of the same reasons as option C, as time is a factor here. Additionally,
    your company would like to refrain from managing more EC2 instances, if possible.


  8. B. With Amazon ElastiCache, only Redis can be run in a high-availability configuration.
    Option A is incorrect because this would add complexity to your architecture. It would also
    likely introduce additional latency, as the company is already using Amazon RDS. Option
    C is incorrect because ElastiCache for Memcached does not support a high-availability
    configuration. Option D is incorrect because DAX is a caching mechanism that is used for
    DynamoDB, not Amazon RDS.

  9. C. Amazon Redshift is the best option. It is a managed AWS data warehouse service that
    allows you to scale up to petabytes worth of data, which would definitely meet their needs.
    Option A is incorrect because Amazon RDS cannot store that much data; the limit of
    Amazon RDS for Aurora is 64 TB. Option B is incorrect because DynamoDB is not meant
    for analytical-type queries—it is meant for simple queries and key-value pair data, which
    is more transactional based. You can query based on only the partition and sort key in
    DynamoDB. Option D is incorrect because Amazon ElastiCache is a caching solution that

    is meant for temporary data. However, you could store queries that ran in Amazon Redshift
    inside ElastiCache. This would improve the performance of frequently run queries, but by
    itself is not a solution.

  10. A. Scans are less efficient than queries. When possible, always use queries with
    DynamoDB. Option B is incorrect because doing nothing isn’t a good solution; the problem
    is unlikely to go away. Option C is incorrect because a strongly consistent read would actu-
    ally be a more expensive query in terms of compute and cost. Strongly consistent reads cost
    twice as much as eventually consistent reads. Option D is incorrect because the concern is
    with reading data, not writing data. WCUs are write capacity units.


Chapter 5: Encryption on AWS

  1. B, D, E. Option A is incorrect because data can be encrypted in any location (on-premises
    or in the AWS Cloud). Option C is incorrect because encryption keys should be stored in a
    secured hardware security module (HSM). Option B is correct because there must be data
    to encrypt in order to use an encryption system. Option D is correct because tools and a
    process must be in place to perform encryption. Option E is correct because encryption
    requires a defined algorithm.

  2. A, C. Option B is incorrect because KMI does not have a concept of a data layer. Option D
    is incorrect because KMI does not have a concept of an encryption layer. Option A is cor-
    rect because the storage layer is responsible for storing encryption keys. Option C is correct
    because the management layer is responsible for allowing authorized users to access the
    stored keys.



  3. A, C, D. Option A is correct because this is a common method to offload the responsibility
    of key storage while maintaining customer-owned management processes. Option C is cor-
    rect because customers can use this approach to fully manage their keys and KMI. Option
    D is correct because AWS Key Management Service (AWS KMS) supports both encryption
    and KMI. Option B is incorrect because this would imply significant overhead to manage
    the storage while not providing customer benefits.

  4. D. Option A is incorrect; with SSE-S3, Amazon S3 is responsible for encrypting the
    objects, not AWS KMS. Option B is incorrect because the customer provides the key to
    the Amazon S3 service. Option C is incorrect because the question specifically states that
    server-side encryption is used. Option D is correct because none of the other options listed
    server-side encryption with AWS KMS (SSE-KMS), whereby AWS KMS manages the keys.
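The distinction between the encryption modes in question 4 shows up directly in the PutObject request parameters. A sketch using boto3-style parameter names (the bucket, key, and key alias are illustrative; no request is made here):

```python
# Request parameters for the three server-side encryption modes of
# Amazon S3 PutObject. Bucket/key names are illustrative.
sse_s3 = {"Bucket": "my-bucket", "Key": "report.csv",
          "ServerSideEncryption": "AES256"}                 # SSE-S3: S3 manages keys

sse_kms = {"Bucket": "my-bucket", "Key": "report.csv",
           "ServerSideEncryption": "aws:kms",               # SSE-KMS
           "SSEKMSKeyId": "alias/my-cmk"}                   # AWS KMS manages this key

sse_c = {"Bucket": "my-bucket", "Key": "report.csv",
         "SSECustomerAlgorithm": "AES256",                  # SSE-C: the customer
         "SSECustomerKey": "<base64-encoded 256-bit key>"}  # supplies the key
```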

  5. B. Option A is incorrect. AWS KMS does not currently support asymmetric encryption.
    Option B is correct because AWS CloudHSM supports both asymmetric and symmet-
    ric encryption. Options C and D are incorrect because CloudHSM supports asymmetric
    encryption.

  6. A, B. Option A is correct because AWS KMS uses AES-256 as its encryption algorithm.
    Option B is correct because CloudHSM supports a variety of symmetric encryption
    options. Options C and D are incorrect because AWS KMS and CloudHSM support sym-
    metric encryption options.

  7. C. Option A is incorrect because the organization does not want to manage any of the
    encryption keys. With AWS KMS, it will have to create customer master keys (CMKs).
    Option B is incorrect because by using customer-provided keys, the organization would
    have to manage the keys. Option C is correct because Amazon S3 manages the encryption
    keys and performs rotations periodically. Option D is incorrect because SSE-S3 provides
    this option.

  8. C. Option A is incorrect because AWS KMS provides a centralized key management
    dashboard; however, this feature does not leverage CloudHSM. Option B is incorrect
    because you want to use CloudHSM with AWS KMS, not as a replacement for AWS KMS.
    Option C is correct because custom key stores allow AWS KMS to store keys in a
    CloudHSM cluster. Option D is incorrect because S3DistCp is a tool used with Amazon
    EMR to copy data from Amazon S3 to the cluster.

  9. A. Option A is correct because AWS KMS provides the simplest solution with little devel-
    opment time to implement encryption on an Amazon EBS volume. Option B is incorrect
    because even though you can use open source or third-party tooling to encrypt volumes,
    there would be some setup and configuration involved. Using CloudHSM would also
    require some configuration and setup, so option C is incorrect. Option D is incorrect
    because AWS KMS enables you to encrypt Amazon EBS volumes.


  10. D. Options A, B, and C are incorrect because AWS KMS integrates with all these services.



    Chapter 6: Deployment Strategies

    1. D. Option D is correct because AWS CodePipeline is a continuous delivery service for fast
      and reliable application updates. It allows the developer to model and visualize the software
      release process. CodePipeline automates your build, test, and release process when there is a
      code change.

      Option A is incorrect because AWS CodeCommit is a secure, highly scalable, managed
      source control service that hosts private Git repositories.

      Option B is incorrect because AWS CodeDeploy automates code deployments to any
      instance and handles the complexity of updating your applications.

      Option C is incorrect because AWS CodeBuild compiles source code, runs tests, and pro-
      duces ready-to-deploy software packages.

    2. A, B, C, D. A, B, C, and D are correct because you can use them all to create a web server
      environment with AWS Elastic Beanstalk.

      Option E is incorrect because AWS Lambda is an event-driven, serverless computing plat-
      form that runs code in response to events. Lambda automatically manages the computing
      resources required by that code.

    3. C. Elastic Beanstalk supports Java, Node.js, and Go, so options A, B, and D are incorrect.
      It does not support Objective C, so option C is the correct answer.

    4. A. Elastic Beanstalk deploys application code and the architecture to support an environ-
      ment for the application to run.

    5. A, C. Elastic Beanstalk supports Linux and Windows. No support is available for an
      Ubuntu-only operating system, Fedora, or Jetty.

    6. A, B. Elastic Beanstalk can run Amazon EC2 instances and build queues with Amazon
      SQS.

    7. A, B. Elastic Beanstalk can access Amazon S3 buckets and connect to Amazon RDS data-
      bases. It cannot install Amazon GuardDuty agents or create or manage Amazon WorkSpaces.

    8. C. By using IAM policies, you can control access to resources attached to users, groups,
      and roles.

    9. B, C. Elastic Beanstalk creates a service role to access AWS services and an instance role to
      access instances.

    10. C. Elastic Beanstalk runs at no additional charge. You incur charges only for services
      deployed.

    11. D. Charges are incurred for all accounts that use the allocated resources.

    12. C. An existing Amazon RDS instance is deleted if the environment is deleted. There is no
      auto-retention of the database instance. You must create a snapshot to retain the data and
      to restore the database.



Chapter 7: Deployment as Code

  1. A. Options B and D are incorrect because the deployment is already in progress, and this
    would not be possible if the AWS CodeDeploy agent had not been installed and running
    properly. The CodeDeploy agent sends progress reports to the CodeDeploy service. The
    service does not attempt to query instances directly, and the Amazon EC2 API does not
    interact with instances at the operating system level. Thus, option C is incorrect, and option
    A is correct.

  2. B. Option B is correct because the ApplicationStop lifecycle event occurs before any
    new deployment files download. For this reason, it will not run the first time a deployment
    occurs on an instance. Option C is incorrect, as this is a valid lifecycle event. Option A is
    incorrect. Option D is incorrect because lifecycle hooks are not aware of the current state of
    your application. Lifecycle hook scripts execute any listed commands.

  3. A. Option B requires precise timing that would be overly burdensome to add to a CI/CD
    workflow. Option C would not include edge cases where both sources are updated within
    a small time period and would require separate release cadences for both sources. Option
    D is incorrect, as AWS CodePipeline supports multiple sources. When multiple sources are
    configured for the same pipeline, the pipeline will be triggered when any source is updated.

  4. C. Option A is incorrect because storing large binary objects in a Git-based repository can
    incur massive storage requirements. Any time a binary object is modified in a repository, a
    new copy is saved. Comparing cost to Amazon S3 storage, it is more expensive to take this
    approach. By building the binary objects into an Amazon Machine Image (AMI), you are
    required to create a new AMI any time changes are made to the objects; thus, option B is
    incorrect. Options D and E introduce unnecessary cost and complexity into the solution. By
    using both an AWS CodeCommit repository and an Amazon S3 archive, you achieve the
    lowest cost and easiest management. Thus, option C is correct.


  5. D. Option A is incorrect because rolling deployments without an additional batch would
    result in less than 100 percent availability, as one batch of the original set of instances
    would be taken out of circulation during the deployment process. Option B is incorrect
    because if you add an additional batch, it would ensure 100 percent availability at the low-
    est cost but would require a longer update process than replacing all instances at once.
    Option C is incorrect because, by default, blue/green deployments will leave the original
    environment intact, accruing charges until it is manually deleted. Option D is correct as
    immutable updates would result in the fastest deployment for the lowest cost. In an immu-
    table update, a new Auto Scaling group is created and registered with the load balancer.
    Once health checks pass, the existing Auto Scaling group is terminated.


  6. D. Option C is incorrect because Amazon S3 does not have a concept of service roles.
    When a pipeline is initiated, it is done in response either to a change in a source or when
    a previous change is released by an authorized AWS IAM user or role. However, after the
    pipeline has been initiated, the AWS CodePipeline service role is used to perform pipeline
    actions. Thus, options A and B are incorrect. Option D is correct, because the pipeline’s
    service role requires permissions to download objects from Amazon S3.



  7. B. Option A is incorrect because this output is used only in the CodeBuild console. Option
    D is incorrect because CodeBuild natively supports this functionality. Though option C
    would technically work, CodeBuild supports output artifacts in the buildspec.yml
    specification. The buildspec includes a files directive to indicate any files from the build
    environment that will be passed as output artifacts. Thus, option B is correct.


  8. C. Option A is incorrect because a custom build environment would expose the secrets
    to any user able to create new build jobs using the same environment. Option B is also
    incorrect. Though uploading the secrets to Amazon S3 would provide some protection,
    administrators with Amazon S3 access may still be able to view the secrets. Option D is
    incorrect because AWS does not recommend storing sensitive information in source control
    repositories, as it is easily viewed by anyone with access to the repository. Option C is
    correct. By encrypting the secrets with AWS KMS and storing them in AWS Systems
    Manager Parameter Store, you ensure that the secrets are protected both at rest and in
    transit. Only AWS IAM users or roles with permissions to both the key and the parameter
    store would have access to the secrets.
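A sketch of the approach in question 8, using boto3-style parameter names (the parameter name, value, and key alias are illustrative; no API call is made here):

```python
# Storing a build secret as a SecureString parameter, encrypted with an
# AWS KMS key. Shown as the request parameters for SSM PutParameter.
put_parameter_args = {
    "Name": "/ci/db-password",    # illustrative parameter path
    "Value": "s3cr3t",
    "Type": "SecureString",       # encrypted at rest with the KMS key below
    "KeyId": "alias/ci-secrets",  # callers need access to the key AND the parameter
}
# A build would then read it with
# get_parameter(Name="/ci/db-password", WithDecryption=True).
```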

  9. A. Options B, C, D, and E are incorrect. AWS Lambda functions can execute as part of a
    pipeline only with the Invoke action type.

  10. A, B. Options D and E are incorrect because FIFO/LIFO are not valid pipeline action con-
    figurations. Option C is incorrect because pipeline stages support multiple actions. Pipeline
    actions can be specified to occur both in series and in parallel within the same stage. Thus,
    options A and B are correct.

  11. D. Option A is incorrect because it will only create or update a stack, not delete the exist-
    ing stack. Option B is incorrect because the desired actions are in the wrong order. Option
    C is incorrect because the final action, “Replace a failed stack,” is not needed. Option D is
    correct. Only two actions are required. First, the stack must be deleted. Second, the replace-
    ment stack can be created. Unless otherwise required, however, both actions can be essen-
    tially accomplished by using one “Create or update a stack” action.

  12. D. Option A is incorrect. AWS CodeCommit is fully compatible with existing Git tools,
    and it also supports authentication with AWS Identity and Access Management (IAM)
    credentials. Options B and C are incorrect; SSH and HTTPS are the only protocols over
    which you can interact with a repository, and both are supported. You can use the
    CodeCommit credential helper to convert an IAM access key and secret access key to valid
    Git credentials for SSH and HTTPS authentication. Thus, option D is correct.

  13. C. Options A, B, and D are all valid Amazon Simple Notification Service (Amazon SNS)
    notification event sources for CodeCommit repositories. Option C is correct because Ama-
    zon SNS notifications cannot be configured to send when a commit is made to a repository.

  14. C, E. Options A, B, and D are incorrect because these action types do not support Code-
    Build projects. Options C and E are correct because CodeBuild projects can be executed in
    a pipeline as part of build and test actions.

  15. D. Environment variables in CodeBuild projects are not encrypted and are visible using the
    CodeBuild API. Thus, options A, B, and C are incorrect. If you need to pass sensitive infor-
    mation to build containers, use Systems Manager Parameter Store instead. Thus, option D
    is correct.



  16. A. Because AWS does not have the ability to create or destroy infrastructure in customer
    data centers, options B, C, and D are incorrect. Option A is correct because on-premises
    instances support only in-place deployments.

  17. C. Options A and B are incorrect because AWS CodeDeploy will not modify files on
    an instance that were not created by a deployment. Option D is incorrect because this
    approach could result in failed deployments because of missing settings in your
    configuration file. Option C is correct. By default, CodeDeploy will not remove files that it
    does not manage. This is maintained as a list of files on the instance.

  18. C. Option A is incorrect because function versions cannot be modified after they have been
    published. Option B is also incorrect because function version numbers cannot be changed.
    Aliases can be used to point to different function versions; however, the alias itself cannot
    be overwritten (it is a pointer to a function version). Thus, option D is incorrect. AWS
    Lambda does not support in-place deployments. This is because, after a function version
    has been published, it cannot be updated. Option C is correct.

  19. C. AWS CodePipeline requires that every pipeline contain a source stage and at least one
    build or deploy stage. Thus, the minimum number of stages is 2.

  20. C. Option A is not correct because deleting the old revisions will temporarily resolve the
    issue. However, future deployments will continue to consume disk space. The same reason-
    ing applies to options B and D, which are also temporary solutions to the problem. The
    CodeDeploy agent configuration file includes a number of useful settings. Among these, a
    limit can be set on how many revisions to store on an instance at any point in time. Thus,
    option C is correct.


Chapter 8: Infrastructure as Code

  1. D. Only the Resources section of a template is required. If this section is omitted, AWS
    CloudFormation has no resources to manage. However, a template does not require
    Parameters, Metadata, or AWSTemplateFormatVersion. Thus, options A, B, C, and E are
    incorrect.
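Question 1's point can be shown concretely. A sketch of the smallest valid template, built here as a Python dict (the logical ID and resource type are illustrative); only the Resources section is present:

```python
import json

# A minimal valid CloudFormation template: only Resources is required.
# Parameters, Metadata, and AWSTemplateFormatVersion are all omitted.
template = {
    "Resources": {
        "MyBucket": {                    # illustrative logical ID
            "Type": "AWS::S3::Bucket"
        }
    }
}
print(json.dumps(template, indent=2))
```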


  2. E. The return value of the Ref intrinsic function for an
    AWS::ElasticLoadBalancing::LoadBalancer resource is the load balancer name, which is
    not valid in a URL, so option A is incorrect. Since the application server instances are in a
    private subnet, neither will have a public DNS name; thus, option B is incorrect. Option C
    uses incorrect syntax for the Ref intrinsic function. Option D attempts to output a URL for
    the database instance. Thus, option E is correct.


  3. A, C, D. If account limits were preventing the launch of additional instances, the stack
    creation process would fail as soon as AWS CloudFormation attempts to launch the
    instance (the Amazon EC2 API would return an error to AWS CloudFormation in
    this case). Thus, option B is incorrect. Any issues preventing the instance from calling
    cfn-signal and sending a success/failure message to AWS CloudFormation would cause
    the creation policy to time out. Thus, options A, C, and D are correct answers.



  4. C. Option A is incorrect because AWS CloudFormation does not monitor the status of your
    database and would not be able to determine whether the database is corrupted. It also
    does not track whether there are currently running transactions before attempting updates.
    Thus, option E is incorrect. If an invalid update is submitted, the stack generates an error
    message when attempting the database update. Thus, option D is incorrect. Though option
    B would work, it is not needed to remove the database from the stack and manage it
    separately. Option C is correct because an AWS CloudFormation service role extends the
    default timeout value for stack actions to allow you to manage resources with longer update
    periods.


  5. A. Custom resource function permissions are obtained by a function execution role, not
    the service role invoking the stack update; thus, option B is incorrect. When the AWS
    Lambda function corresponding to a custom resource no longer exists, the custom resource
    will fail to update immediately; thus, option C is incorrect. However, if the custom resource
    function is executed but does not provide a response to the AWS CloudFormation service
    endpoint, the resource times out with the aforementioned error. Thus, option A is correct.

  6. A. AWS CloudFormation processes transformations by creating a change set, which
    generates an AWS CloudFormation supported template. Without the AWS::Serverless
    transform, AWS CloudFormation cannot process the AWS SAM template. For any stack
    in your account, the current template can be downloaded using the get-stack-template
    AWS CLI command. This command will return templates as processed by AWS
    CloudFormation; thus, option B is incorrect. Option C is also incorrect, because the
    original template is not saved before executing the transform. Option D is also incorrect,
    as AWS CloudFormation saves the current template for all stacks.

  7. E. AWS SAM supports other AWS CloudFormation resources, and it is not limited to
    defining only AWS::Serverless::* resource types; thus, option D is incorrect, and option A
    is correct. However, the AWS::Serverless transform will not automatically associate
    serverless functions with AWS::ApiGateway::RestApi resources. The transform will
    automatically associate any functions with the serverless API being declared, or it will
    create a new one when the transform is executed. Thus, option B is also correct. Option C
    is also correct because AWS SAM also supports Swagger definitions to outline the
    endpoints of your OpenAPI specification.

  8. A. The cfn-init helper script is used to define which packages, files, and other
    configurations will be applied when an instance is first launched. The cfn-signal helper
    script is used to signal back to AWS CloudFormation when a resource creation or update
    has completed, so options B and C are incorrect. Option D is incorrect because cfn-update
    is not a valid helper script. The cfn-hup helper script performs updates on an instance
    when its parent stack is updated. Thus, option A is correct.
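The helper scripts operate on instance metadata. A minimal sketch of the AWS::CloudFormation::Init metadata that cfn-init processes at launch (the package and service names are illustrative):

```python
# Metadata consumed by cfn-init at first boot; cfn-hup can re-apply it when
# the parent stack is updated. Installing and starting Apache is illustrative.
metadata = {
    "AWS::CloudFormation::Init": {
        "config": {
            "packages": {"yum": {"httpd": []}},   # install Apache via yum
            "services": {
                "sysvinit": {
                    "httpd": {"enabled": "true", "ensureRunning": "true"}
                }
            },
        }
    }
}
```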


  9. C. Wait conditions accept only one signal and will not track additional signals from the
    same resource; thus, options A and B are incorrect. WaitCount is an invalid option type, so
    option D is incorrect. Option C is correct because creation policies enable you to specify a
    signal count and timeout.
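As a sketch of the distinction in question 9, a CreationPolicy fragment (the values are illustrative) that waits for multiple success signals, which a wait condition cannot do:

```python
# A CreationPolicy attribute, expressed as the template fragment it
# produces. Unlike a wait condition, it can require several cfn-signal
# success messages before the resource is marked CREATE_COMPLETE.
creation_policy = {
    "CreationPolicy": {
        "ResourceSignal": {
            "Count": 3,          # wait for 3 success signals
            "Timeout": "PT15M"   # fail if they do not arrive within 15 minutes
        }
    }
}
```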



  10. A. Options B and C will affect resources in your account. Option D would let you see the
    syntax differences between two template versions, but this does not indicate what type of
    updates will happen on the resources themselves. Thus, option D is incorrect. Change sets
    create previews of infrastructure changes without actually executing them. After reviewing
    the changes that will be performed, the change set can be executed on the target stack.
    Thus, option A is correct.

  11. B. Option A is incorrect, as this is a supported feature of nested stacks. Option C creates a
    circular dependency between the parent and child stacks (the parent stack needs to import
    the value from the child stack, which cannot be created until the parent begins creation).
    Option D is incorrect because cross-stack references are not possible without exporting and
    importing outputs. Option B uses intrinsic functions to access resource properties in the
    same manner as any other stack resource.

  12. B. AWS CloudFormation does not assume full administrative control on your account, and
    it requires permissions to interact with resources you own. AWS CloudFormation can oper-
    ate using a service role; however, this must be explicitly passed as part of the stack opera-
    tion. Otherwise, it will execute with the same permissions as the user performing the stack
    operation. Thus, option B is the correct answer.

  13. C. Because the reference to the Amazon DynamoDB table is made as part of an arbitrary
    string (the function code), AWS CloudFormation does not recognize this as a dependency
    between resources. To prevent any potential errors, you would need to declare explicitly
    that the function depends on the table. Thus, option C is correct.
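A minimal sketch of the fix for question 13, with illustrative logical IDs: the dependency is stated explicitly because CloudFormation cannot parse it out of the function's code string.

```python
# Template fragment: the function body names the table only inside an
# opaque code string, so an explicit DependsOn attribute is required.
resources = {
    "OrdersTable": {"Type": "AWS::DynamoDB::Table"},
    "OrderFunction": {
        "Type": "AWS::Lambda::Function",
        "DependsOn": "OrdersTable",  # explicit: CloudFormation cannot infer it
    },
}
```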

  14. E. Replacing updates results in the deletion of the original resource and the creation of a
    replacement. AWS CloudFormation creates the replacement first with a new physical ID
    and verifies it before deleting the original. Because of this, option E is correct (all of the
    above).

  15. B, C. Option A is incorrect, as it states that no interruption will occur. Options D and E
    are not valid update types. Replacing updates delete the original resource and provision
    a replacement. Updates with some interruption have resource downtime, but the original
    resource is not replaced. Thus, options B and C are correct.

  16. A. The export does not need to be removed from the stack before it can be deleted, so
    option B is incorrect. Options C and D are also incorrect, as the stack does not need to be
    deleted. However, the stack cannot be deleted until any other stacks that import the value
    remove the import. Thus, option A is correct.


  17. B, D, E. If a stack update fails for any reason, the next state would be
    UPDATE_ROLLBACK_IN_PROGRESS, which must occur before the rollback fails or
    completes. A stack that is currently updating can either complete the update, fail to update,
    or complete and clean up old resources. Thus, options B, D, and E are correct.


  18. B. Because the stack status shows the update has completed, you know that the update
    did not fail. This means that options A and D are incorrect. When a stack updates and
    resources are created, they will not be deleted unless the update fails. Thus, option C is
    incorrect. Old resources that are no longer required are removed during the cleanup phase.
    Thus, option B is correct.

  19. A, C. AWS CloudFormation currently supports JSON and YAML template formats only.



  20. E. AWS CloudFormation provides a number of benefits over procedural scripting. The risk
    of human error is reduced because templates are validated by AWS CloudFormation before
    deployment. Infrastructure is repeatable and versionable using the same process as applica-
    tion code development. Individual users provisioning infrastructure need a reduced scope of
    permissions when using AWS CloudFormation service roles. Thus, option E is correct.

  21. B. Option C is incorrect because, though on-premises servers can be part of a custom
    resource’s workflow, they do not receive requests directly. Options D and E are incorrect
    because specific actions are not declared in custom resource properties. Option A is incor-
    rect because AWS services themselves do not process custom resource requests. Specifically,
    Amazon SNS topics and AWS Lambda functions can act as recipients to custom resource
    requests. Thus, option B is correct.

  22. C. Options A and B are incorrect because they would require interacting with other AWS
    services using the AWS CLI. For certain situations, such as running arbitrary commands in
    Amazon EC2 instance user data scripts, this would work. However, not all resource types
    have this ability. Option D is incorrect, as this is a built-in functionality of AWS
    CloudFormation. Option C is correct because any data that is declared in a custom
    resource response is accessible to the remainder of the template using the Fn::GetAtt
    intrinsic function.
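Sketching question 22 (the logical ID and attribute name are illustrative): the custom resource's response carries a Data map, and the rest of the template reads those attributes with Fn::GetAtt.

```python
# The custom resource's handler responds with a Data map of attributes...
response = {"Status": "SUCCESS", "Data": {"SecretArn": "<generated-arn>"}}

# ...and elsewhere in the template, the value is read back with Fn::GetAtt.
reference = {"Fn::GetAtt": ["MyCustomResource", "SecretArn"]}
```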


Chapter 9: Configuration as Code

  1. E. You can raise all of the limits listed by submitting a limit increase request to AWS Support.

  2. D. Option A is incorrect because instances do not attempt to download new cookbooks
    when performing Chef runs. Option B is incorrect because AWS OpsWorks Stacks does not
    have a concept of cookbook caching. Option C is incorrect because lifecycle events do not
    allow you to specify cookbook versions. Option D is correct because after updating a
    custom cookbook repository, any currently online instances will not automatically receive
    the updated cookbooks. To deploy the modified cookbooks to the instances, you must run
    the Update Custom Cookbooks command.

  3. B. Options A, C, and D are incorrect because OpsWorks Stacks provides integration with
    Elastic Load Balancing to handle automatic registration and deregistration. Option B is cor-
    rect as the Elastic Load Balancing layers for OpsWorks Stacks automatically register instances
    when they come online and deregister them when they move to a different state. You can also
    enable connection draining to prevent deregistration until any active sessions end.

  4. A, B. Option C is incorrect because changing the cluster capacity will not affect service
    scaling. Option D is incorrect because submitting a replacement will result in the same
    behavior. If there are insufficient resources to launch replacement tasks when a service
    updates, Amazon Elastic Container Service (Amazon ECS) will continue to attempt to
    launch the tasks until it is able to do so. If you increase the cluster size, additional resources
    add to the pool to allow the new task to start. After it has done so, the old task will terminate. After it terminates, the cluster can scale back to its original size. If the downtime of
    this service does not concern you, set the minimum in-service percentage to 0 percent to
    allow Amazon ECS to terminate the currently running task before it launches the new one.
    Thus, options A and B are correct.
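To make the deployment arithmetic concrete, here is a small illustrative sketch (not from the book) of the bounds ECS keeps a service within during a rolling update, based on the minimum healthy percent and maximum percent settings:

```python
import math

def task_count_bounds(desired, min_healthy_pct, max_pct):
    """During an ECS service deployment, the scheduler keeps the number
    of running tasks between these bounds (illustrative arithmetic)."""
    lower = math.floor(desired * min_healthy_pct / 100)
    upper = math.ceil(desired * max_pct / 100)
    return lower, upper

# With one task and no spare cluster capacity, a 0% minimum lets ECS
# stop the old task before it launches the replacement:
assert task_count_bounds(1, 0, 100) == (0, 1)

# With a 100% minimum, the old task must stay up until a second task
# starts, which requires room in the cluster for both at once:
assert task_count_bounds(1, 100, 200) == (1, 2)
```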

    904 Appendix Answers to Review Questions


  5. B. Options A, C, and D are incorrect because no other parties have access to the underlying clusters in AWS Fargate. When you use the Fargate launch type, AWS provisions and manages underlying cluster instances for your containers. You do not need to manage maintenance and patching. Thus, option B is correct.

  6. A. Option B is incorrect, as this is a matter of personal preference. Option C is also incorrect because instances can be stopped and started individually, not only in layers at a time. Option D is incorrect because the configure lifecycle event runs on all instances in a stack, regardless of layer. Assigning recipes is performed at the layer level, meaning that all instances in the same layer will run the same configuration code. Organizing instances into layers based on purpose removes the need to add complex conditional logic. Thus, option A is correct.

  7. C. Option A is incorrect because AWS OpsWorks Stacks does not include a central Chef
    Server. Option B is incorrect because storing recipes as part of an AMI would introduce
    considerable complexity for regular recipe code updates. Option D is incorrect because
    Amazon EC2 is not a valid storage location for cookbooks. A custom cookbook repository
    location is configured for a stack. When instances in the stack are first launched, they will
    download cookbooks from this location and run them as part of lifecycle events. Thus,
    option C is correct.

  8. A. Option B is incorrect because you cannot associate a single Amazon RDS database instance with multiple stacks at the same time. Option C is incorrect because this approach would require manual snapshotting and data migration that is not necessary. Option D is incorrect. Migration of database instances between stacks is a common workflow. To migrate an Amazon RDS layer, you must remove it from the first stack before you add it to the second. Thus, option A is correct.

  9. C. Option A is incorrect because 24/7 instances are normally recommended for constant demand. Option B is incorrect because load-based instances are recommended for variable, unpredictable demand changes. Option D is incorrect because On-Demand is an Amazon ECS instance type, not an OpsWorks Stacks instance type. You configure time-based instances to start and stop on a specific schedule. AWS recommends this for a predictable increase in workload throughout a day. Thus, option C is correct.

  10. B. Option A is incorrect because 24/7 instances are normally recommended for constant
    demand. Option C is incorrect because time-based instances are recommended for changes
    in load that are predictable over time. Option D is incorrect because Spot is an Amazon
    ECS instance type, not an OpsWorks Stacks instance type. Option B is correct because
    load-based instances are recommended for unpredictable changes in demand.

  11. A. Option B is incorrect because the Amazon ECS service role is used to create and manage AWS resources on behalf of the customer. Option C is incorrect because AWS Systems Manager is not part of Amazon ECS. Option D is incorrect because Amazon ECS automates the process of stopping and starting containers within a cluster. The Amazon ECS agent is responsible for all on-instance tasks such as downloading container images and starting or stopping containers. Thus, option A is correct.



  12. B. Option A is incorrect. Though high availability is a tenet of SOA, it is not a requirement.
    Option C is incorrect because SOA does not define how development teams are organized.
    Option D is incorrect because SOA does not define what should or should not be procured
    from vendors. Service-oriented architecture involves using containers to implement discrete
    application components separately from one another to ensure availability and durability of
    each component. Thus, option B is correct.


  13. D. A single task definition can describe up to 10 containers to launch at a time. To launch
    more containers, you need to create multiple task definitions. Task definitions should group
    containers by similar purpose, lifecycle, or resource requirements. Thus, option D is correct.

  14. A. Option B is incorrect because PAT cannot be configured within your VPC (it must be
    configured using a proxy instance of some kind). Option C is incorrect because containers
    can be configured to bind to a random port instead of a specific one. Dynamic host port
    mapping allows you to launch multiple copies of the same container listening on different
    ports. Classic Load Balancers do not support dynamic host port mapping. Thus, option D
    is incorrect. Option A is correct because the Application Load Balancer is then responsible
    for mapping requests on one port to each container’s specific port.
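As an aside (not from the book), a minimal sketch of the ECS task definition port mapping that enables dynamic host port mapping: setting the host port to 0 asks the container agent to pick an ephemeral port, so several copies of the same container can run on one instance, and an Application Load Balancer target group tracks each task's actual port.

```python
def port_mapping(container_port, dynamic=True):
    """Sketch of an ECS task definition port mapping. A hostPort of 0
    requests a dynamically assigned ephemeral host port, letting multiple
    copies of the same container share a single cluster instance."""
    return {
        "containerPort": container_port,
        "hostPort": 0 if dynamic else container_port,
        "protocol": "tcp",
    }

# Three copies of the same container can coexist on one instance:
mappings = [port_mapping(8080) for _ in range(3)]
```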

  15. A. Options B and C are incorrect because they do not consider the Availability Zone of
    each cluster instance when placing tasks. Option D is incorrect because least cost is not a
    valid placement policy. The spread policy distributes tasks across multiple availability zones
    and cluster instances. Thus, option A is correct.


Chapter 10: Authentication and
Authorization

  1. D. You need to use a third-party IdP as the confirmation of identity. Based on that confirmation, a policy can be assigned. Option A is incorrect because roles cannot be assigned to
    users outside of your account. Option B is incorrect because you cannot assign an IAM user
    ID to a user that is external to AWS. Option C is incorrect because it makes provisioning an
    identity a manual process.

  2. D. An identity provider (IdP) answers the question “Who are you?” Based on this answer,
    policies are assigned. Those policies control the level of access to the AWS infrastructure
    and applications (if using AWS for managed services).

    Option A is incorrect; it is one of the functions of a service provider—to control access to
    applications. Option B is incorrect; policies are used to control access to APIs, which is how
    access to the AWS infrastructure is controlled. Option C is incorrect; identity providers do not perform error checking on policy assignment.

  3. A. Where possible, using multi-factor authentication (MFA) minimizes the impact of lost
    or compromised credentials. Option B is incorrect in that embedding credentials is both
    a security risk and makes credential administration much more difficult. Option C would
    decrease the opportunity for misuse. It would not address any misuse that was a result of
    internal users. Option D is a good step but not as secure as option A.



  4. D. If you want to use Security Assertion Markup Language (SAML) as an identity provider
    (IdP), use SAML 2.0. With Amazon Cognito, you can use Google (option A), Microsoft
    Active Directory (option B), and your own identity store (option C) as identity providers.

  5. C. By using AWS Cloud services, such as Amazon Cognito, you are able to view the API
    calls in AWS CloudTrail. Amazon CloudWatch Logs are generated if you are using Amazon
    Cognito to control access to AWS resources. Option A is incorrect as AWS can act as an
    IdP for non-AWS services. Option B is incorrect in that Amazon CloudWatch allows you to
    monitor the creation and modification of identity pools. It will not show activity. Option D
    is incorrect because the service provider assigns the policies, not the identity provider (IdP).

  6. A, C. AD Connector is easy to set up, and you continue to use the existing AD console to
    do configuration changes on Active Directory. Option B is incorrect because you cannot
    connect to multiple Active Directory domains with AD Connector, only a single one. AD
    Connector requires a one-to-one relationship with your on-premises domains. You can
    use AD Connector for AWS-created applications and services. Option D is incorrect
    because AD Connector is used to support AWS services.

  7. A. To use AWS Single Sign-On (AWS SSO), you must set up AWS Organizations and enable all features. AWS SSO uses Microsoft Active Directory (either AWS Managed Microsoft Active Directory or Active Directory Connector [AD Connector] but not Simple Active Directory). AWS SSO does not support Amazon Cognito. Option B is incorrect because AWS SSO does not use SAML. Options C and D are incorrect because you do not need to deploy either Simple AD or Amazon Cognito as a prerequisite for using AWS SSO.


  8. C. Option C is correct because GetFederationToken returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) for a federated user. You call the GetFederationToken action using the long-term security credentials of an IAM user. This is appropriate in contexts where those credentials can be safely stored, usually in a server-based application. Option D is incorrect because GetSessionToken provides temporary security credentials only for the calling user, not for a federated user. Option A is incorrect because AssumeRole is shorter lived (the default is 60 minutes; it can be extended to 720 minutes). Option B is incorrect because GetUserToken is a nonexistent API.

  9. B. Because it is a managed service, you are not able to access the Amazon EC2 instances
    directly running AWS Managed Microsoft AD. AWS Managed Microsoft AD provides for
    daily snapshots, monitoring, and the ability to sync with an existing on-premises Active
    Directory.

  10. A. Amazon Active Directory Connector (AD Connector) allows you to use your existing
    RADIUS-based multi-factor authentication (MFA) infrastructure to provide authentication.



    Chapter 11: Refactor to Microservices

    1. B. Option B is correct because a Parallel state enables you to execute several different
      execution paths at the same time in parallel. This is useful if you have activities or tasks
      that do not depend on each other and can execute in parallel. This can make your workflow
      complete faster. Option A is incorrect because it executes only one of the branches, not all.
      Option C is incorrect because it can execute one task, not multiple. Option D is incorrect
      because it waits and does not execute any tasks.

    2. B. The messages move to the dead-letter queue if they have met the Maximum Receives
      parameter (the number of times that a message can be received before being sent to a dead-
      letter queue) and have not been deleted.
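As an illustration (not from the book), a minimal sketch of the RedrivePolicy attribute that configures this behavior on an Amazon SQS queue; the dead-letter queue ARN below is a placeholder. The console's "Maximum Receives" setting corresponds to the maxReceiveCount field.

```python
import json

def redrive_policy(dlq_arn, max_receives):
    """Build the RedrivePolicy attribute for an Amazon SQS queue. After
    a message has been received max_receives times without being
    deleted, SQS moves it to the dead-letter queue."""
    return {"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receives),
    })}

# Placeholder ARN for demonstration only:
attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:my-dlq", 5)
```

In practice, this dictionary would be passed as the Attributes parameter when setting queue attributes on the source queue.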


    3. A. Amazon Simple Queue Service (Amazon SQS) supports messages of up to 256 KB. Refer to Table 11.2, Table 11.3, and Table 11.4.

    4. B. Option B is correct because to send a message larger than 256 KB, you save the file in Amazon S3 and then send a link to the file in an Amazon SQS message. Option A is incorrect because, using the technique in option B, this is possible. Option C is incorrect because AWS Lambda cannot push messages to Amazon SQS that exceed the size limit of 256 KB. Option D is incorrect because it does not address the question.
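This is the same pattern the Amazon SQS Extended Client Library implements. A minimal local sketch (not from the book; the bucket and key names are placeholders) of the decision logic:

```python
SQS_LIMIT = 256 * 1024  # 256 KB SQS message size limit

def make_message(payload, bucket, key):
    """If a payload exceeds the SQS limit, store it in Amazon S3 and
    send only a pointer; otherwise send it inline. In a real system the
    pointer branch would first call s3.put_object(Bucket=bucket,
    Key=key, Body=payload)."""
    if len(payload.encode()) <= SQS_LIMIT:
        return {"inline": payload}
    return {"s3Pointer": {"bucket": bucket, "key": key}}

# A 300 KB payload is routed through S3:
msg = make_message("x" * 300_000, "my-bucket", "large-payload.json")
```

The consumer then fetches the object from S3 when it sees the pointer instead of an inline body.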

    5. C. Option C is correct if you need to send messages to other users. Create an Amazon SNS topic and subscribe all the administrators to this topic. Configure an Amazon CloudWatch event to send a message on a daily cron schedule to the Amazon SNS topic. Option A is not correct because Amazon SQS queues do not support subscriptions. Option B is not correct because the message is sent without any status information. Option D is not correct because AWS Lambda does not allow sending outgoing email messages on port 25. Email servers use port 25 for outgoing messages. Port 25 is blocked on Lambda as an antispam measure.

    6. A. Amazon SNS supports the same attributes and parameters as Amazon SQS. Refer to
      Table 11.2, Table 11.3, and Table 11.4.

    7. D. Option D is correct because there is no limit on the number of consumers as long as they stay within the capacity of the stream, which is based on the number of shards. For a single shard, the capacity is 2 MB of read throughput per second or five read transactions per second. Options A and B are incorrect because there is no limit on the number of consumers that can consume from the stream. Option C is incorrect because together the consumers can consume only 2 MB per second or five transactions per second.
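The per-shard limits scale linearly with the shard count, which a trivial sketch (not from the book) makes explicit:

```python
def stream_read_capacity(shards):
    """Aggregate read capacity of a Kinesis data stream: each shard
    supports up to 2 MB/sec of read throughput and 5 read transactions
    per second, shared across all consumers of that shard."""
    return {"mb_per_sec": 2 * shards, "reads_per_sec": 5 * shards}

assert stream_read_capacity(1) == {"mb_per_sec": 2, "reads_per_sec": 5}
assert stream_read_capacity(4) == {"mb_per_sec": 8, "reads_per_sec": 20}
```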

    8. C. Option C is correct because Amazon Kinesis Data Streams is a service for ingesting
      large amounts of data in real time and for performing real-time analytics on the data.
      Option A is not correct because you use Amazon SQS to ingest events, but it does not pro-
      vide a way to aggregate them in real time. Option B is incorrect because Amazon SNS is a
      notification service that does not support ingesting. Option D is incorrect because Amazon
      Kinesis Data Firehose provides analytics; however, it has a latency of at least 60 seconds.



    9. A. Options B, C, and D are incorrect because there are no guarantees about where the
      records for Washington and Wyoming will be relative to each other. They could be on the
      same shard, or they could be on different shards. Option A is correct because the records
      for Washington will not be distributed across multiple shards.
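An illustrative sketch (not from the book) of why equal partition keys land on the same shard: Kinesis MD5-hashes the partition key into a 128-bit number, and each shard owns a contiguous range of that hash space, so "Washington" always maps to one shard. The range arithmetic below is simplified for demonstration.

```python
import hashlib

def shard_for(partition_key, shard_count):
    """Simplified model of Kinesis record routing: hash the partition
    key with MD5 into the 128-bit key space, then map that value onto
    one of shard_count evenly sized hash ranges. Equal keys always
    produce the same hash, hence the same shard and preserved order."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * shard_count // 2**128

# Every record keyed "Washington" is routed to a single shard:
assert shard_for("Washington", 8) == shard_for("Washington", 8)
```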

    10. E. Option E is correct because all the options from A through D are correct. Options A, B,
      C, and D are all valid options for writing Amazon Kinesis Data Streams producers.


Chapter 12: Serverless Compute

  1. D. Option D is correct because it enables the company to keep their existing AWS Lambda functions intact and create new versions of the AWS Lambda function. When they are ready to update the Lambda function, they can assign the PROD alias to the new version. Option A is possible; however, this adds a lot of unnecessary work, because developers would have to update all of their code everywhere. Option B is incorrect because moving regions would require moving all other services or introducing latency into the architecture, which is not the best option. Option C is possible; however, creating new AWS accounts for each application version is not a best practice, and it complicates the organization of such accounts unnecessarily.
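A toy model (not from the book) of what the alias buys you: PROD is a named pointer to a published function version, so promoting a release means repointing the alias rather than changing any caller.

```python
class AliasTable:
    """Toy model of Lambda aliases: each alias is a named pointer to a
    published function version. Callers invoke the alias name, so a cut-
    over only repoints the alias."""
    def __init__(self):
        self.aliases = {}

    def publish(self, alias, version):
        self.aliases[alias] = version

    def resolve(self, alias):
        return self.aliases[alias]

t = AliasTable()
t.publish("PROD", 3)   # callers invoke "PROD", which resolves to version 3
t.publish("PROD", 4)   # cut over: the same callers now hit version 4
```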


  2. B. At the time of this writing, the maximum amount of memory for a Lambda function is
    3008 MB.

  3. A. At the time of this writing, the default timeout value for a Lambda function is 3 seconds. However, you can set this to as little as 1 second or as long as 300 seconds.


  4. C. Options A, B, and D are all viable answers; however, the question asks what is the best
    serverless option. Lambda is the only serverless option in this scenario; therefore, option C
    is the best answer.

  5. D. At the time of this writing, the maximum execution time for a Lambda function is 300
    seconds (5 minutes).

  6. A. At the time of this writing, Ruby is not supported for Lambda functions.

  7. A. At the time of this writing, the default limit for concurrent executions with Lambda is set to 1000. This is a soft limit that can be raised. To do this, you must open a case through the AWS Support Center page and submit a Service Limit Increase request.

  8. C. There are two types of policies with Lambda: a function policy and an execution policy,
    or AWS role. A function policy defines which AWS resources are allowed to invoke your
    function. The execution role defines which AWS resources your function can access. Here,
    the function is invoked successfully, but the issue is that the Lambda function does not have
    access to process objects inside Amazon S3. Option A is not correct because a function
    policy is responsible for invoking or triggering the function; here, the function is invoked
    and executes properly. Option B is not correct, as the scenario states that the trust policy is
    valid. The execution policy or AWS role is responsible for providing Lambda with access to
    other services; thus, the correct answer is option C.



  9. A. Option A is correct because Lambda automatically retries failed executions for asynchronous invocations. You can also configure Lambda to forward payloads that were not processed to a DLQ, which can be an Amazon SQS queue or Amazon SNS topic. Option B is incorrect because a VPC network is an AWS service that allows you to define your own network in the AWS Cloud. Option C is incorrect because this is dealing with concurrency issues, and here you have no problems with Lambda concurrency. Additionally, concurrency is enabled by default with Lambda. Option D is incorrect because Lambda does support SQS.


  10. C. Option C is correct because environment variables enable you to pass settings dynamically to your function code and libraries without changing your code. Option A is not correct because dead-letter queues are used for events that could not be processed by Lambda and need to be investigated later. Option B is not correct because it can be done. Option D is incorrect because this can be accomplished through environment variables.
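A minimal sketch (not from the book) of a handler reading a setting from the environment; TABLE_NAME is a hypothetical variable, so retargeting the function is a configuration change rather than a code change:

```python
import os

def handler(event, context):
    """Lambda-style handler that reads its configuration from an
    environment variable instead of a hard-coded value."""
    table = os.environ.get("TABLE_NAME", "default-table")
    return {"table": table}

# Simulate the variable being set in the function's configuration:
os.environ["TABLE_NAME"] = "orders-prod"
result = handler({}, None)   # → {"table": "orders-prod"}
```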


Chapter 13: Serverless Applications

  1. D. Option A is incorrect. While AWS CloudFormation can help you provision infrastructure, AWS Serverless Application Model (AWS SAM) is optimized for deploying AWS serverless resources by making it easy to organize related components and resources that operate on a single stack; therefore, option A is not the best answer. Option C is incorrect because AWS OpsWorks is managed by Puppet or Chef, which you can use to deploy infrastructure. However, these are not the optimal answers given that you are specifically looking for serverless technologies. The same is true for Ansible in option B. Option D is correct because AWS SAM is an open-source framework that you can use to build serverless applications on AWS.

  2. B. CORS is responsible for allowing cross-site access to your APIs. Without it, you will not be able to call the Amazon API Gateway service. You use a stage to deploy your API, and a resource is a typed object that is part of your API’s domain. Each resource may have an associated data model and relationships to other resources and can respond to different methods. Option A is incorrect because you do need to enable CORS. Option B is correct because CORS is responsible for allowing one server to call another server or service. For more information on CORS, see: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS. Option C is incorrect, as deploying a stage allows you to deploy your API. Option D is incorrect, as a resource is where you can define your API, but it is not yet deployed to a stage and “live.”

  3. A, C. There are three benefits to serverless stacks: no server management, flexible scaling,
    and automated high availability. Costs vary case by case. For these reasons, option A and
    option C are the best answers.

  4. D. Option A is incorrect; API Gateway only supports HTTPS endpoints. Option B is
    incorrect because API Gateway does not support creating FTP endpoints. Option C
    is incorrect; API Gateway does not support SSH endpoints. API Gateway only creates
    HTTPS endpoints.



  5. C. Option A is incorrect because Amazon CloudFront supports a variety of sources,
    including Amazon S3. Option B is incorrect, because serverless applications contain both
    static and dynamic data. Additionally, CloudFront supports both static and dynamic data.
    Option C is correct because CloudFront supports a variety of origins. For the serverless
    stack, it supports Amazon S3. Option D is incorrect because Amazon S3 is a valid origin
    for CloudFront.

  6. D. Options A, B, and C are incorrect because each is not the only language/platform supported. Option D is correct because all of these languages/platforms are supported.

  7. C. Option C is correct because Amazon Cognito supports SMS-based MFA.

  8. D. Options A, B, and C are incorrect because Amazon Cognito supports device tracking
    and remembering.


  9. A. Option A is correct because the events property allows you to assign Lambda to an event source. Option B is incorrect because handler is the function handler in a Lambda function. Option C is incorrect because context is the context object for a Lambda function. Option D is incorrect because runtime is the language that your Lambda function runs as.


  10. D. Option A is incorrect. You can run React in an AWS service. Option B is incorrect. You
    can run your web server with Amazon S3. With option C, you do not need to load balance
    Lambda functions because Lambda scales automatically. Option D is correct. You can run
    a fully dynamic website in a serverless fashion. You can also use JavaScript frameworks
    such as Angular and React. The NoSQL database may need to be refactored to run in
    Amazon DynamoDB.


Chapter 14: Stateless Application
Patterns

  1. B. Option B is correct because the maximum size of an item in a DynamoDB table is 400 KB. Option C is incorrect because 4 KB is the capacity of a strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. Option D is incorrect because 1,024 KB is not the size limit of a DynamoDB item. The maximum item size is 400 KB.

  2. C. Option C is correct because when creating a new bucket, the bucket name must be globally unique. Option A is incorrect because versioning is disabled by default. Option B is incorrect because the maximum size for an object stored in Amazon S3 is 5 TB, not 5 GB. Option D is incorrect because you cannot change a bucket name after you have created the bucket.

  3. B. Option B is correct because storage class is the only factor that is not considered when determining which region to choose. Option A is incorrect because latency is a factor when choosing a bucket region. Option C is incorrect because prices are different between regions; thus, you might consider cost when choosing a bucket region. Option D is incorrect because you may be required to store your data in a bucket in a particular region based on legal requirements or compliance.



  4. C. Option C is correct because the recommended technique for protecting your table data at rest is server-side encryption. Option A is incorrect because fine-grained access controls are a mechanism for providing access to resources and API calls, but the mechanism is not used to encrypt or protect data at rest. Option B is incorrect because TLS protects data in transit, not data at rest. Option D is incorrect because client-side encryption is applied to data before it is transmitted from a user device to a server.

  5. D. Option D is correct because versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. Option A is incorrect because lifecycle policies are used to transition data to a different storage class and do not protect objects against accidental overwrites or deletions. Option B is incorrect because enabling MFA Delete on the bucket requires an additional method of authentication before allowing a deletion; it does not by itself enable recovery. Option C is incorrect because using a path-style URL is unrelated to protecting against overwrites or accidental deletions.


  6. C, D. Options C and D are correct because Amazon S3 stores objects in buckets, and each
    object that is stored in a bucket is made up of two parts: the object itself and the metadata.
    Option A is incorrect because Amazon S3 stores data as objects, not in fixed blocks. Option
    B is incorrect because the size limit of an object is 5 TB.

  7. C. Option C is correct because DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table, and the service stores this information in a log for up to 24 hours. Options A, B, and D are incorrect because 24 hours is the maximum time that data persists on an Amazon DynamoDB stream.


  8. B. Option B is correct because DynamoDB Streams ensures that each stream record
    appears exactly once in the stream. Options A and C are incorrect because each stream
    record appears exactly once. Option D is incorrect because you cannot set the retention
    period.

  9. A. Option A is correct because your bucket can be in only one of three versioning states: versioning-enabled, versioning-disabled, or versioning-suspended. Thus, versioning-paused is a state that is not a valid configuration. Options B, C, and D are incorrect—they are all valid bucket states for versioning.


  10. A. Option A is correct because Query is the DynamoDB operation used to find items based on primary key values. Option B is incorrect because UpdateTable is the DynamoDB operation used to modify the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table. Option C is incorrect because DynamoDB does not have a Search operation. Option D is incorrect because Scan is the DynamoDB operation used to read every item in a table.
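As an illustration (not from the book), a sketch of the parameters a DynamoDB Query request takes; the key condition restricts the read to one primary key value, whereas Scan has no such condition and reads the whole table. The table and attribute names are placeholders.

```python
def build_query(table, pk_name, pk_value):
    """Assemble parameters for the DynamoDB Query operation, which finds
    items by primary key value via a KeyConditionExpression. Scan, by
    contrast, takes no key condition and reads every item."""
    return {
        "TableName": table,
        "KeyConditionExpression": f"{pk_name} = :v",
        "ExpressionAttributeValues": {":v": {"S": pk_value}},
    }

# Placeholder names for demonstration only:
params = build_query("Orders", "CustomerId", "C-1001")
```

In practice, this dictionary would be passed to the low-level DynamoDB Query API call.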


  11. A, B, C. Option D is incorrect because, when compared to the other options, a bank balance is not likely to be stored in a cache; it is probably not data that is retrieved as frequently as the others are fetched. Options A, B, and C are all better data candidates to cache because multiple users are more likely to access them repeatedly. You could, however, also cache the bank account balance for shorter periods if the database query is not performing well.



  12. A, D. Options A and D are correct because Amazon ElastiCache supports both the Redis
    and Memcached open-source caching engines. Option B is incorrect because MySQL is
    not a caching engine—it is a relational database engine. Option C is incorrect because
    Couchbase is a NoSQL database and not one of the caching engines that ElastiCache
    supports.

  13. C. Option C is correct because the default limit is 20 nodes per cluster.

  14. C. Option C is correct because ElastiCache is a managed in-memory caching service.
    Option A is incorrect because the description aligns more closely to the Elasticsearch
    Service. Option B is incorrect because this is not an accurate description of the ElastiCache
    service. Option D is incorrect because, as a managed service, ElastiCache does not manage
    Amazon EC2 instances.

  15. B, D, E. Option B is correct because DynamoDB is a NoSQL low-latency transactional
    database that you can use to store state. Option D is correct because Amazon Elastic File
    System (Amazon EFS) is an elastic file system that you can also use to store state. Option E
    is correct because ElastiCache is an in-memory cache that is also a good solution for storing
    state. Option A is incorrect because Amazon CloudFront is a content delivery network that
    is used more for object caching, not in-memory caching. Option C is incorrect because
    Amazon CloudWatch is a metric repository and does not provide any kind of user-accessible
    storage. Option F is incorrect because Amazon SQS is used for exchanging messages.


  16. C. Option C is correct because Amazon DynamoDB is a nonrelational database that delivers reliable performance at any scale. Option A is incorrect because Amazon S3 Glacier is
    for data archiving and long-term backup. It is also an object store and not a database store.
    Option B is incorrect because Amazon RDS is designed for relational workloads. Option D
    is incorrect because Amazon Redshift is a data warehousing service.

  17. D. Option D is correct because local secondary indexes on a table are created when the
    table is created. Options A and C are incorrect because you can have five local secondary
    indexes or five global secondary indexes per table. Option B is incorrect because you can
    create global secondary indexes after you have created the table.


Chapter 15: Monitoring and
Troubleshooting

  1. B. Option A is incorrect because you do not want to scale in to reduce your capacity when you are experiencing a high load. Option C is incorrect because you do not want to scale in to reduce your capacity when your application is taking a long time to respond. Option D is incorrect because metrics are required for triggering AWS Auto Scaling events. Option B is correct because scaling out should occur when more resources are being consumed than normal, and scaling in should occur when fewer resources are being consumed.



  2. D. Options A, B, and C are incorrect because data points with a period of 300 seconds are stored for 63 days in Amazon CloudWatch.

  3. D. Option A is incorrect because AWS CloudTrail events show who made the request.
    Option B is incorrect because CloudTrail shows when the request was made, and option C
    is incorrect because CloudTrail shows what was requested. Option E is incorrect because
    CloudTrail shows what resource was acted on. Option D is correct because CloudTrail can
    provide no insight into why a request was made.

  4. C. Option A would work; however, it is not the most cost-effective way because logs stored
    in CloudWatch cost more than logs stored in Amazon S3. Option B is incorrect because
    CloudWatch cannot ingest logs without access to your servers. Option C is correct because
    archiving logs from CloudWatch to Amazon S3 reduces overall data storage costs.

  5. A, B, D. Option C is incorrect because CloudWatch has no way to access data in your applications or servers. You must push the data either by using the CloudWatch SDK or AWS CLI or by installing the CloudWatch agent. Option A is correct because the CloudWatch agent is required to send operating system and application logs to CloudWatch. Option B is likewise correct because metrics and logs are sent to CloudWatch using the PutMetricData and PutLogEvents API actions. Option D is also correct because the AWS CLI can be used to send metrics and logs to CloudWatch using the put-metric-data and put-log-events commands.

  6. C. Options A and B are incorrect because the strings must match a filter pattern equal to 404. Option C is correct because 404 matches the error code present in the example logs.
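A rough local stand-in (not from the book) for what a CloudWatch Logs metric filter with the pattern 404 does: it counts log events containing that term, and the resulting metric can then drive an alarm. The sample log lines are invented.

```python
def matches_404(log_line):
    """Approximate a metric filter pattern of "404" by checking whether
    the log message contains that term as a whitespace-separated token."""
    return "404" in log_line.split()

# Invented access-log lines for demonstration:
access_log = [
    '192.0.2.10 - - "GET /index.html" 200 1024',
    '192.0.2.11 - - "GET /missing.html" 404 512',
]
errors = sum(matches_404(line) for line in access_log)   # → 1
```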


  7. A. AWS X-Ray color-codes the response types you get from your services. For 4XX, or client-side errors, the circle is orange. Thus, option B is incorrect. Application failures or faults (5XX errors) are red, and successful responses, or 2XX, are green. Thus, options C and D are incorrect. For throttling (429 errors), the circle is purple. Thus, option A is correct.

  8. C. Option A is incorrect because CloudTrail logs list security-related events and do
    not provide a dashboard feature. Option B is incorrect because CloudWatch alarms are
    used to notify you when something isn't operating based on your specifications.
    Option D is incorrect because Amazon CloudWatch Logs is for sending and storing
    server logs in the CloudWatch service; however, you could use these logs to create a
    metric and then place it on a CloudWatch dashboard. Option C is the correct answer:
    use CloudWatch dashboards to create a single interface where you can monitor all of
    your resources.

  9. D. CloudTrail stores the CloudTrail event history for 90 days; however, if you would
    like to store this information permanently, you can create a CloudTrail trail, which
    stores the logs in Amazon S3.

  10. D. Option C is incorrect because the LookupEvents API action can be used to query event
    data. Options A and B are also incorrect because the AWS CLI and the AWS Management
    Console use the same CloudTrail APIs to query event data. Thus, option D is correct.



  11. B, D. Management events are operations performed on resources in your AWS account.
    Data events are operations performed on data stored in AWS resources. For example,
    modifying an object in Amazon S3 would qualify as a data event, and changing a bucket
    policy would qualify as a management event. Because options A, C, and E involve sending
    or receiving data, not modifying or creating AWS resources, they are data events. Thus,
    options B and D are correct.


  12. A, C, D. When installing the CloudWatch Logs agent, no additional networking
    configuration is required as long as your instance can reach the CloudWatch API
    endpoint. Therefore, option B is incorrect. You can use AWS Systems Manager to
    install and start the agent, but it is not required to install the Systems Manager
    agent alongside the CloudWatch Logs agent; thus, option E is incorrect. When
    installing the agent, you must configure the specific logs to send. The agent must be
    started before new log data is sent to CloudWatch Logs.

  13. A. CloudWatch alarms support triggering actions in Amazon EC2, EC2 Auto Scaling,
    and Amazon SNS. Thus, options B, C, and D are incorrect. It is possible to trigger AWS
    Lambda functions from an alarm, but only by first sending the alarm notification to an
    Amazon SNS topic. Thus, option A is correct.
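The indirection described above (alarm → SNS topic → Lambda subscription) shows up in the alarm definition itself: the AlarmActions list targets an SNS topic ARN, never a Lambda function directly. A minimal sketch of a PutMetricAlarm parameter set, with a hypothetical topic ARN:

```python
# Hedged sketch: a CloudWatch alarm's actions target an SNS topic; a Lambda
# function reacts by subscribing to that topic. The topic ARN is hypothetical.
alarm = {
    "AlarmName": "HighCPU",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                 # seconds per evaluation data point
    "EvaluationPeriods": 2,
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # The indirection: alarm -> SNS topic -> Lambda subscription
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:alerts"],
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm) would create it.
```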

  14. D. CPU, network, and disk activity are metrics that are visible to the underlying host for an
    instance. Thus, options A, B, and C are incorrect. Because memory is allocated in a single
    block to an instance and is managed by the guest OS, the underlying host does not have
    visibility into consumption. This metric would have to be delivered to CloudWatch as a
    custom metric by using the agent. Thus, option D is correct.

  15. A. No namespace starts with an Amazon prefix; therefore, options B and D are
    incorrect. Option C is incorrect because namespaces are specific to a service (Amazon
    EC2), not a resource (an instance). Option A is correct because the Amazon EC2
    service uses the AWS prefix, followed by EC2.


Chapter 16: Optimization

  1. D. Amazon EC2 instance store is directly attached to the instance, which gives you
    the lowest latency between the disk and your application. Instance store is also
    provided at no additional cost on instance types that have it available, so this is
    the lowest-cost option. Additionally, because the data can be retrieved from
    somewhere else, it can be copied back to an instance as needed. Option A is incorrect
    because Amazon S3 cannot be directly mounted to an Amazon EC2 instance. Options B and
    C are incorrect because Amazon EBS and Amazon EFS would be higher-cost options, with
    higher latency than an instance store.


  2. C. GetItem retrieves a single item from a table. This is the most efficient way to
    read a single item because it provides direct access to the physical location of the
    item. Options A and B are incorrect. Query retrieves all the items that have a
    specific partition key. Within those items, you can apply a condition to the sort key
    and retrieve only a subset of the data. Query provides quick, efficient access to the
    partitions where the data is stored. Scan retrieves all of the items in the specified
    table, and it can consume large amounts of system resources based on the size of the
    table. Option D is incorrect. DynamoDB is a nonrelational NoSQL database, and it does
    not support table joins. Instead, applications read data from one table at a time.
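The three read operations compared above have visibly different request shapes when passed to the low-level DynamoDB client. A sketch, with a hypothetical table and key schema:

```python
# Illustration of the three DynamoDB read request shapes discussed above,
# as passed to boto3's low-level client. Table and key names are hypothetical.
get_item = {                      # direct single-item read: most efficient
    "TableName": "Orders",
    "Key": {"CustomerId": {"S": "c-100"}, "OrderId": {"S": "o-1"}},
}
query = {                         # all items sharing one partition key
    "TableName": "Orders",
    "KeyConditionExpression": "CustomerId = :c",
    "ExpressionAttributeValues": {":c": {"S": "c-100"}},
}
scan = {"TableName": "Orders"}    # reads the whole table: most expensive
# e.g. boto3.client("dynamodb").get_item(**get_item) with credentials set up.
```

Note how GetItem names the full primary key, Query names only the partition key, and Scan names no key at all, which is why their costs diverge.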

  3. C. Option C is a fault-tolerance check. By launching instances in multiple
    Availability Zones in the same region, you help protect your applications from a
    single point of failure. Options A and B are performance checks. Provisioned IOPS
    volumes in Amazon EBS are designed to deliver the expected performance only when they
    are attached to an EBS-optimized instance. Some headers, such as Date or User-Agent,
    significantly reduce the cache hit ratio (the proportion of requests that are served
    from a CloudFront edge cache). This increases the load on your origin and reduces
    performance because CloudFront must forward more requests to your origin. Option D is
    a cost check. Elastic IP addresses are static IP addresses designed for dynamic cloud
    computing. A nominal charge is imposed for an Elastic IP address that is not
    associated with a running instance.


  4. B. Options A, C, and D are incorrect because the partition keys used in these
    options could cause "hot" (heavily requested) partition keys because of a lack of
    uniformity. Design your application for uniform activity across all logical partition
    keys in the table and its secondary indexes. Use distinct values for each item.
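One common way to spread writes that would otherwise pile onto a single popular partition key is to append a calculated suffix. A minimal sketch; the shard count and key format here are hypothetical, not a DynamoDB requirement:

```python
# Illustration only: distributing writes for a popular partition key by
# appending a calculated suffix derived from the item ID. The shard count
# (10) and the "base#suffix" format are hypothetical design choices.
import hashlib

def sharded_key(base_key, item_id, shards=10):
    # A stable hash keeps the same item on the same shard across writes.
    suffix = int(hashlib.md5(item_id.encode()).hexdigest(), 16) % shards
    return f"{base_key}#{suffix}"

keys = {sharded_key("2023-06-01", f"order-{i}") for i in range(1000)}
# 1,000 items now land on up to 10 distinct partition keys instead of one.
```

Reads for a given item remain cheap because the suffix is recomputable from the item ID.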

  5. D. Option A is incorrect because SQS is a messaging service. Option B is incorrect
    because SNS is a notification service. Option C is incorrect because CloudFront is a
    web distribution service. Option D is correct because ElastiCache improves the
    performance of your application by retrieving data from high-throughput, low-latency
    in-memory data stores. For details, see https://aws.amazon.com/elasticache.

  6. C. Option C is correct because CloudFront optimizes performance if your workload is
    mainly sending GET requests. There are also fewer direct requests to Amazon S3, which
    reduces cost. For details, see
    https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html.

  7. D. Option A is incorrect because AWS Auto Scaling is optimal for unpredictable
    workloads. Option B is incorrect because cross-region replication is better for
    disaster recovery scenarios. Option C is incorrect because DynamoDB Streams is better
    suited to streaming data to other sources. Option D is correct because Amazon
    DynamoDB Accelerator (DAX) provides fast in-memory performance. For details, see
    https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html.

  8. C. Option A is incorrect because the EC2 instance store is too volatile to be
    optimal. Option B is incorrect because this is a security solution and will not
    impact performance positively. Option C is correct because ElastiCache is ideal for
    handling session state; you can abstract the HTTP sessions from the web servers by
    using Redis or Memcached. Option D is incorrect because compression is not the
    optimal solution given the choices. For details, see
    https://aws.amazon.com/caching/session-management/.

  9. B. Option B is correct because lazy loading only loads data into the cache when
    necessary. This avoids filling up the cache with data that isn't requested. Options
    A, C, and D are incorrect because they do not match the requirement of the question.
    For details, see
    https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html.
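Lazy loading (also called cache-aside) can be sketched in a few lines; the dict-backed cache and database below stand in for ElastiCache and your actual data store:

```python
# Minimal cache-aside (lazy loading) sketch: data enters the cache only on a
# miss, so unrequested data never occupies cache space. The dicts stand in
# for ElastiCache and the backing database.
cache = {}
database = {"user:1": "Alice", "user:2": "Bob"}
db_reads = 0

def get(key):
    global db_reads
    if key in cache:              # cache hit: no database work
        return cache[key]
    db_reads += 1                 # cache miss: fetch, then populate
    value = database[key]
    cache[key] = value
    return value

get("user:1"); get("user:1"); get("user:2")
# Two misses populated the cache; the repeat read was served from memory.
```

The trade-off is that a first read of any key pays a miss penalty, which is why write-through is sometimes layered on top for hot data.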

  10. A. Option A is correct because information about the instance, such as its private
    IP address, is stored in the instance metadata. Option B is incorrect because private
    IP information is not stored in the instance user data. Option C is incorrect because
    running ifconfig is manual and not automated. Option D is incorrect because it is not
    clear what type of instance the application is running on. For details, see
    https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html.
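Instance metadata is served from the link-local address 169.254.169.254 and is reachable only from the instance itself, so this sketch only builds the request URL rather than fetching it:

```python
# Hedged sketch: instance metadata lives behind the link-local address
# 169.254.169.254, reachable only from the instance itself, so this snippet
# only constructs the URL; fetching it works only on an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    return METADATA_BASE + path.lstrip("/")

url = metadata_url("local-ipv4")   # path for the instance's private IP address
# On an EC2 instance you could then fetch it with:
#   urllib.request.urlopen(url).read().decode()
```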

  11. D. Options A, B, and C are incorrect because they are not recommended best
    practices. Option D is correct because it is one of the recommendations in the best
    practices documentation: "Avoid using recursive code." For details, see
    https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html.

  12. C. Option A is incorrect because changing the entire architecture is not ideal.
    Option B is incorrect because Multi-AZ is used for fault tolerance. Option C is
    correct because load can be reduced by routing read queries from your application to
    a read replica. Option D is incorrect because using an Elastic Load Balancing load
    balancer will not reduce the query load. For details, see
    https://aws.amazon.com/rds/details/read-replicas/.

  13. C. Option A is incorrect because this is relevant only when you need a static
    website. Option B is incorrect because changing the storage class does not help with
    latency. Option C is correct because cross-region replication maintains object copies
    in regions that are geographically closer to your users, reducing latency. Option D
    is incorrect because encryption is necessary only for securing data at rest. For
    details, see https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html.

  14. B. Options A, C, and D are incorrect because they are not optimal for handling
    large object uploads to Amazon S3. Option B is correct because a multipart upload
    enables you to upload large objects in parts to Amazon S3. For details, see
    https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html.
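The arithmetic behind a multipart upload is simple: the object is split into parts of at least 5 MB (except the last), uploaded independently, then reassembled. A sketch with hypothetical sizes:

```python
# Illustration of the part arithmetic behind an S3 multipart upload.
# The 8 MB part size and 100 MB object size are hypothetical examples.
import math

MIN_PART = 5 * 1024 * 1024                 # S3's minimum non-final part size

def part_count(object_size, part_size=8 * 1024 * 1024):
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MB S3 minimum")
    return math.ceil(object_size / part_size)

parts = part_count(100 * 1024 * 1024)      # a 100 MB object in 8 MB parts
# boto3's high-level transfer (s3.upload_file) performs this splitting
# automatically once the object exceeds its multipart threshold.
```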

  15. C. Option A is incorrect because this is not the optimal approach for
    bootstrapping. Option B is incorrect because, while possible, it is not optimal.
    Option C is correct because instance user data is used to perform common automated
    configuration tasks and run scripts after boot. Option D is incorrect because
    bootstrapping is done in instance user data, not instance metadata. For details, see
    https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html.
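User data is typically a shell script that cloud-init runs at first boot. A sketch of how it would be passed through RunInstances; the AMI ID and script contents below are hypothetical:

```python
# Hedged sketch: passing a bootstrap script as EC2 user data. The AMI ID
# and the script body are hypothetical; only the parameter shape matters.
USER_DATA = """#!/bin/bash
yum -y update
yum -y install httpd
systemctl enable --now httpd
"""

run_params = {
    "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "UserData": USER_DATA,                # boto3 base64-encodes this for you
}
# With credentials configured, the launch would be:
#   boto3.client("ec2").run_instances(**run_params)
```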


    Assessment Test

    1. Which of the following describes the cloud design principle of scalability?

      1. The ability to automatically increase available compute resources to meet growing user
        demand

      2. The ability to route incoming client requests between multiple application servers

      3. The ability to segment physical resources into multiple virtual partitions

      4. The ability to reduce production costs by spreading capital expenses across many
        accounts

    2. Which of the following best describes the cloud service model known as infrastructure as a
      service (IaaS)?

      1. End user access to software applications delivered over the internet

      2. Access to a simplified interface through which customers can directly deploy
        their application code without having to worry about managing the underlying
        infrastructure

      3. Customer rental of the use of measured units of a provider’s physical compute, storage,
        and networking resources

      4. Abstracted interfaces built to manage clusters of containerized workloads

    3. How does AWS ensure that no single customer consumes an unsustainable proportion of
      available resources?

      1. AWS allows customers to consume as much as they’re willing to pay for, regardless of
        general availability.

      2. AWS imposes default limits on the use of its service resources but allows customers to
        request higher limits.

      3. AWS imposes hard default limits on the use of its service resources.

      4. AWS imposes default limits on the use of its services by Basic account holders;
        Premium account holders face no limits.

    4. The AWS Free Tier is designed to give new account holders the opportunity to get to know
      how their services work without necessarily costing any money. How does it work?

      1. You get service credits that can be used to provision and launch a few typical
        workloads.

      2. You get full free access to a few core AWS services for one month.

      3. You get low-cost access to many core AWS services for three months.

      4. You get free lightweight access to many core AWS services for a full 12 months.

    5. AWS customers receive “production system down” support within one hour when they
      subscribe to which support plan(s)?

      1. Enterprise.

      2. Business and Enterprise.

      3. Developer and Basic.

      4. All plans get this level of support.



    6. AWS customers get full access to the AWS Trusted Advisor best practice checks when they
      subscribe to which support plan(s)?

      1. All plans get this level of support.

      2. Basic and Business.

      3. Business and Enterprise.

      4. Developer, Business, and Enterprise.

    7. The AWS Shared Responsibility Model illustrates how AWS itself (as opposed to its
      customers) is responsible for which aspects of the cloud environment?

      1. The redundancy and integrity of customer-added data

      2. The underlying integrity and security of AWS physical resources

      3. Data and configurations added by customers

      4. The operating systems run on EC2 instances

    8. Which of these is a designation for two or more AWS data centers within a single
      geographic area?

      1. Availability Zone

      2. Region

      3. Network subnet

      4. Geo-unit

    9. How, using security best practices, should your organization’s team members access your
      AWS account resources?

      1. Only a single team member should be given any account access.

      2. Through a jointly shared single account user who’s been given full account-wide
        permissions.

      3. Through the use of specially created users, groups, and roles, each given the fewest
        permissions necessary.

      4. Ideally, resource access should occur only through the use of access keys.

    10. Which of the following describes a methodology that protects your organization’s data
      when it’s on-site locally, in transit to AWS, and stored on AWS?

      1. Client-side encryption

      2. Server-side encryption

      3. Cryptographic transformation

      4. Encryption at rest

    11. What authentication method will you use to access your AWS resources remotely through
      the AWS Command Line Interface (CLI)?

      1. Strong password

      2. Multifactor authentication

      3. SSH key pairs

      4. Access keys



    12. Which of these is the primary benefit from using resource tags with your AWS assets?

      1. Tags enable the use of remote administration operations via the AWS CLI.

      2. Tags make it easier to identify and administrate running resources in a busy AWS
        account.

      3. Tags enhance data security throughout your account.

      4. Some AWS services won’t work without the use of resource tags.

    13. What defines the base operating system and software stack that will be available for a new
      Elastic Compute Cloud (EC2) instance when it launches?

      1. The Virtual Private Cloud (VPC) into which you choose to launch your instance.

      2. The instance type you select.

      3. The Amazon Machine Image (AMI) you select.

      4. You don’t need to define the base OS—you can install that once the instance launches.

    14. Which of the following AWS compute services offers an administration experience that
      most closely resembles the way you would run physical servers in your own local data
      center?

      1. Simple Storage Service (S3)

      2. Elastic Container Service (ECS)

      3. Elastic Compute Cloud (EC2)

      4. Lambda

    15. Which of the following AWS object storage services offers the lowest ongoing charges, but
      at the cost of some convenience?

      1. Glacier

      2. Storage Gateway

      3. Simple Storage Service (S3)

      4. Elastic Block Store (EBS)

    16. Which of the following AWS storage services can make the most practical sense for
      petabyte-sized archives that currently exist in your local data center?

      1. Saving to a Glacier Vault

      2. Saving to a Simple Storage Service (S3) bucket

      3. Saving to an Elastic Block Store (EBS) volume

      4. Saving to an AWS Snowball device

    17. Which of the following will provide the most reliable and scalable relational database
      experience on AWS?

      1. Relational Database Service (RDS)

      2. Running a database on an EC2 instance

      3. DynamoDB

      4. Redshift



    18. What’s the best and simplest way to increase reliability of an RDS database instance?

      1. Increase the available IOPS.

      2. Choose the Aurora database engine when you configure your instance.

      3. Enable Multi-AZ.

      4. Duplicate the database in a second AWS Region.

    19. How does AWS describe an isolated networking environment into which you can launch
      compute resources while closely controlling network access?

      1. Security group

      2. Virtual private cloud (VPC)

      3. Availability Zone

      4. Internet gateway

    20. What service does AWS use to provide a content delivery network (CDN) for its customers?

      1. VPC peering

      2. Internet gateway

      3. Route 53

      4. CloudFront

    21. What is Amazon’s Git-compliant version control service for integrating your source code
      with AWS resources?

      1. CodeCommit

      2. CodeBuild

      3. CodeDeploy

      4. Cloud9

    22. Which AWS service allows you to build a script-like template representing complex resource
      stacks that can be used to launch precisely defined environments involving the full range of
      AWS resources?

      1. LightSail

      2. EC2

      3. CodeDeploy

      4. CloudFormation

    23. What is Amazon Athena?

      1. A service that permits queries against data stored in Amazon S3

      2. A service that permits processing and analyzing of real-time video and data streams

      3. A NoSQL database engine

      4. A Greece-based Amazon Direct Connect service partner



    24. What is Amazon Kinesis?

      1. A service that permits queries against data stored in Amazon S3

      2. A service that permits processing and analyzing of real-time video and data streams

      3. A NoSQL database engine

      4. A Greece-based Amazon Direct Connect service partner

    25. What is Amazon Cognito?

      1. A service that can manage authentication and authorization for your public-facing
        applications

      2. A service that automates the administration of authentication secrets used by your
        AWS resources

      3. A service that permits processing and analyzing of real-time video and data streams

      4. A relational database engine


Answers to Assessment Test

  1. A. A scalable deployment will automatically “scale up” its capacity to meet growing user
    demand without the need for manual interference. See Chapter 1.

  2. C. IaaS is a model that gives customers access to virtualized units of a provider’s physical
    resources. IaaS customers manage their infrastructure much the way they would local,
    physical servers. See Chapter 1.

  3. B. AWS applies usage limits on most features of its services. However, in many cases, you
    can apply for a limit to be lifted. See Chapter 2.

  4. D. The Free Tier offers you free lightweight access to many core AWS services for a full
    12 months. See Chapter 2.

  5. B. “Production system down” support within one hour is available only to subscribers to
    the Business or Enterprise support plans. See Chapter 3.

  6. D. All support plans come with full access to Trusted Advisor except for the (free) Basic
    plan. See Chapter 3.

  7. B. According to the Shared Responsibility Model, AWS is responsible for the underlying
    integrity and security of AWS physical resources, but not the integrity of the data and
    configurations added by customers. See Chapter 4.

  8. A. An Availability Zone is one of two or more physical data centers located within a single
    AWS Region. See Chapter 4.

  9. C. Team members should each be given identities (as users, groups, and/or roles) configured
    with exactly the permissions necessary to do their jobs and no more. See Chapter 5.

  10. A. End-to-end encryption that protects data at every step of its life cycle is called client-side
    encryption. See Chapter 5.

  11. D. AWS CLI requests are authenticated through access keys. See Chapter 6.

  12. B. Resource tags—especially when applied with consistent naming patterns—can make it
    easier to visualize and administrate resources on busy accounts. See Chapter 6.

  13. C. The AMI you select while configuring your new instance defines the base OS. See
    Chapter 7.

  14. C. You can administrate EC2 instances using techniques that are similar to the way you’d
    work with physical servers. See Chapter 7.

  15. A. Amazon Glacier can reliably store large amounts of data for a very low price but
    requires CLI or SDK administration access, and retrieving your data can take hours. See
    Chapter 8.



  16. D. You can transfer large data stores to the AWS cloud (to S3 buckets) by having Amazon
    send you a Snowball device to which you copy your data and which you then ship back to
    Amazon. See Chapter 8.

  17. A. RDS offers a managed and highly scalable database environment for most popular
    relational database engines (including MySQL, MariaDB, and Oracle). See Chapter 9.

  18. C. Multi-AZ will automatically replicate your database in a second Availability Zone for
    greater reliability. It will, of course, also double your costs. See Chapter 9.

  19. B. A VPC is an isolated networking environment into which you can launch compute
    resources while closely controlling network access. See Chapter 10.

  20. D. CloudFront is a content delivery network (CDN) that distributes content through its
    global network of edge locations. See Chapter 10.

  21. A. CodeCommit is a Git-compliant version control service for integrating your source code
    with AWS resources. See Chapter 11.

  22. D. CloudFormation templates can represent complex resource stacks that can be used
    to launch precisely defined environments involving the full range of AWS resources. See
    Chapter 11.

  23. A. Amazon Athena is a managed service that permits queries against S3-stored data. See
    Chapter 13.

  24. B. Amazon Kinesis allows processing and analyzing of real-time video and data
    streams. See Chapter 13.

  25. A. Amazon Cognito can manage authentication and authorization for your public-facing
    applications. See Chapter 13.

Chapter 1: The Cloud


Review Questions

  1. Which of the following does not contribute significantly to the operational value of a large
    cloud provider like AWS?

    1. Multiregional presence

    2. Highly experienced teams of security engineers

    3. Deep experience in the retail sphere

    4. Metered, pay-per-use pricing

  2. Which of the following are signs of a highly available application? (Select TWO.)

    1. A failure in one geographic region will trigger an automatic failover to resources in a
      different region.

    2. Applications are protected behind multiple layers of security.

    3. Virtualized hypervisor-driven systems are deployed as mandated by company policy.

    4. Spikes in user demand are met through automatically increasing resources.

  3. How does the metered payment model make many benefits of cloud computing possible?
    (Select TWO.)

    1. Greater application security is now possible.

    2. Experiments with multiple configuration options are now cost-effective.

    3. Applications are now highly scalable.

    4. Full-stack applications are possible without the need to invest in capital expenses.

  4. Which of the following are direct benefits of server virtualization? (Select TWO.)

    1. Fast resource provisioning and launching

    2. Efficient (high-density) use of resources

    3. Greater application security

    4. Elastic application designs

  5. What is a hypervisor?

    1. Hardware device used to provide an interface between storage and compute modules

    2. Hardware device used to provide an interface between networking and compute
      modules

    3. Software used to log and monitor virtualized operations

    4. Software used to administrate virtualized resources run on physical infrastructure



  6. Which of the following best describes server virtualization?

    1. “Sharding” data from multiple sources into a single virtual data store

    2. Logically partitioning physical compute and storage devices into multiple smaller
      virtual devices

    3. Aggregating physical resources spread over multiple physical devices into a single
      virtual device

    4. Abstracting the complexity of physical infrastructure behind a simple web interface

  7. Which of the following best describes Infrastructure as a Service products?

    1. Services that hide infrastructure complexity behind a simple interface

    2. Services that provide a service to end users through a public network

    3. Services that give you direct control over underlying compute and storage resources

    4. Platforms that allow developers to run their code over short periods on cloud servers

  8. Which of the following best describes Platform as a Service products?

    1. Services that hide infrastructure complexity behind a simple interface

    2. Platforms that allow developers to run their code over short periods on cloud servers

    3. Services that give you direct control over underlying compute and storage resources

    4. Services that provide a service to end users through a public network

  9. Which of the following best describes Software as a Service products?

    1. Services that give you direct control over underlying compute and storage resources

    2. Services that provide a service to end users through a public network

    3. Services that hide infrastructure complexity behind a simple interface

    4. Platforms that allow developers to run their code over short periods on cloud servers

  10. Which of the following best describes scalability?

    1. The ability of an application to automatically add preconfigured compute resources to
      meet increasing demand

    2. The ability of an application to increase or decrease compute resources to match
      changing demand

    3. The ability to more densely pack virtualized resources onto a single physical server

    4. The ability to bill resource usage using a pay-per-user model



  11. Which of the following best describes elasticity?

    1. The ability to more densely pack virtualized resources onto a single physical server

    2. The ability to bill resource usage using a pay-per-user model

    3. The ability of an application to increase or decrease compute resources to match
      changing demand

    4. The ability of an application to automatically add preconfigured compute resources to
      meet increasing demand

  12. Which of the following characteristics most help AWS provide such scalable services?
    (Select TWO.)

    1. The enormous number of servers it operates

    2. The value of its capitalized assets

    3. Its geographic reach

    4. Its highly automated infrastructure administration systems

Chapter 2: Understanding Your AWS Account


Review Questions

  1. Which of the following EC2 services can be used without charge under the Free Tier?

    1. Any single EC2 instance type as long as it runs for less than one hour per day

    2. Any single EC2 instance type as long as it runs for less than 75 hours per month

    3. A single t2.micro EC2 instance type instance for 750 hours per month

    4. t2.micro EC2 instance type instances for a total of 750 hours per month

  2. You want to experiment with deploying a web server on an EC2 instance. Which two of
    the following resources can you include to make that work while remaining within the Free
    Tier? (Select TWO.)

    1. A 5 GB bucket on S3

    2. A t2.micro instance type EC2 instance

    3. A 30 GB solid-state Elastic Block Store (EBS) drive

    4. Two 20 GB solid-state Elastic Block Store (EBS) drives

  3. Which of the following usage will always be cost-free even after your account’s Free Tier
    has expired? (Select TWO.)

    1. One million API calls/month on Amazon API Gateway

    2. 10 GB of data retrievals from Amazon Glacier per month

    3. 500 MB/month of free storage on the Amazon Elastic Container Registry (ECR)

    4. 10 custom monitoring metrics and 10 alarms on Amazon CloudWatch

  4. Which of the following tools are available to ensure you won’t accidentally run past your
    Free Tier limit and incur unwanted costs? (Select TWO.)

    1. Automated email alerts when activity approaches the Free Tier limits

    2. The Top Free Tier Services by Usage section on the Billing & Cost Management
      Dashboard

    3. Billing & Cost Management section on the Top Free Tier Services Dashboard

    4. The Billing Preferences Dashboard

  5. Which of the following is likely to be an accurate source of AWS pricing information?

    1. Wikipedia pages relating to a particular service

    2. The AWS Command Line Interface (AWS CLI)

    3. AWS online documentation relating to a particular service

    4. The AWS Total Cost of Ownership Calculator

  6. Which of the following will probably not affect the pricing for an AWS service?

    1. Requests for raising the available service limit

    2. AWS Region

    3. The volume of data saved to an S3 bucket

    4. The volume of data egress from an Amazon Glacier vault



  7. Which of the following is a limitation of the AWS Simple Monthly Calculator?

    1. You can calculate resource use for only one service at a time.

    2. Not all AWS services are included.

    3. The pricing is seldom updated and doesn’t accurately reflect current pricing.

    4. You’re not able to specify specific configuration parameters.

  8. Which of the following Simple Monthly Calculator selections will likely have an impact on
    most other configuration choices on the page? (Select TWO.)

    1. Calculate By Month Or Year

    2. Include Multiple Organizations

    3. Free Usage Tier

    4. Choose Region

  9. Which of the following is not an included parameter in the AWS Total Cost of Ownership
    Calculator?

    1. The tax implications of a cloud deployment

    2. Labor costs of an on-premises deployment

    3. Networking costs of an on-premises deployment

    4. Electricity costs of an on-premises deployment

  10. Which of the following AWS Total Cost of Ownership Calculator parameters is likely to
    have the greatest impact on cost?

    1. Currency

    2. AWS Region

    3. Guest OS

    4. Number of servers

  11. Which of the following AWS documentation URLs points to the page containing an
    up-to-date list of service limits?

    1. https://docs.aws.amazon.com/general/latest/gr/limits.html

    2. https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

    3. https://aws.amazon.com/general/latest/gr/aws_service_limits.html

    4. https://docs.aws.amazon.com/latest/gr/aws_service_limits.html

  12. Which of the following best describes one possible reason for AWS service limits?

    1. To prevent individual customers from accidentally launching a crippling level of
      resource consumption

    2. To more equally distribute available resources between customers from different
      regions

    3. To allow customers to more gradually increase their deployments

    4. Because there are logical limits to the ability of AWS resources to scale upward

      30 Chapter 2 Understanding Your AWS Account


  13. Is it always possible to request service limit increases from AWS?

    1. Yes. All service limits can be increased.

    2. No. A limit can never be increased.

    3. Service limits are defaults. They can be increased or decreased on demand.

    4. No. Some service limits are hard.

  14. Which is the best place to get a quick summary of this month’s spend for your account?

    1. Budgets

    2. Cost Explorer

    3. Cost and usage reports

    4. Billing & Cost Management Dashboard

  15. What is the main goal for creating a Usage budget type (in AWS Budgets)?

    1. To correlate usage per unit cost to understand your account cost efficiency

    2. To track the status of any active reserved instances on your account

    3. To track particular categories of resource consumption

    4. To monitor costs being incurred against your account

  16. Which of the following is not a setting you can configure in a Cost budget?

    1. Period (monthly, quarterly, etc.)

    2. Instance type

    3. Start and stop dates

    4. Owner (username of resource owner)

  17. What is the main difference between the goals of Cost Explorer and of cost and usage
    reports?

    1. Cost Explorer displays visualizations of high-level historical and current account costs,
      while cost and usage reports generate granular usage reports in CSV format.

    2. Cost and usage reports display visualizations of high-level historical and current
      account costs, while Cost Explorer generates granular usage reports in CSV format.

    3. Cost Explorer lets you set alerts that are triggered by billing events, while cost and
      usage reports help you visualize system events.

    4. Cost and usage reports are meant to alert you to malicious intrusions, while Cost
      Explorer displays visualizations of high-level historical and current account costs.

  18. What is the purpose of cost allocation tags?

    1. To associate spend limits to automatically trigger resource shutdowns when necessary

    2. To help you identify the purpose and owner of a particular running resource to better
      understand and control deployments

    3. To help you identify resources for the purpose of tracking your account spending

    4. To visually associate account events with billing periods

  19. Which of the following scenarios would be a good use case for AWS Organizations?
    (Select TWO.)

    1. A single company with multiple AWS accounts that wants a single place to
      administrate everything

    2. An organization that provides AWS access to large teams of its developers and admins

    3. A company that’s integrated some operations with an upstream vendor

    4. A company with two distinct operational units, each with its own accounting system
      and AWS account

  20. Which of these tools lets you design graphs within the browser interface to track your
    account spending?

    1. Budgets

    2. Cost Explorer

    3. Reports

    4. Consolidated Billing

44 Chapter 3 Getting Support on AWS


Review Questions

  1. Your company is planning a major deployment on AWS. While the design and testing stages
    are still in progress, which of the following plans will provide the best blend of support and
    cost savings?

    1. Basic

    2. Developer

    3. Business

    4. Enterprise

  2. Your web development team is actively gearing up for a deployment of an ecommerce site.
    During these early stages of the process, individual developers are running into frustrating
    conflicts and configuration problems that are highly specific to your situation. Which of the
    following plans will provide the best blend of support and cost savings?

    1. Basic

    2. Developer

    3. Business

    4. Enterprise

  3. Your corporate website was offline last week for more than two hours—which caused
    serious consequences, including the early retirement of your CTO. Your engineers have
    been having a lot of trouble tracking down the source of the outage and admit that they
    need outside help. Which of the following will most likely meet that need?

    1. Basic

    2. Developer

    3. Business

    4. Enterprise

  4. For which of the following will AWS provide direct 24/7 support to all users—even those
    on the Basic Support plan?

    1. Help with infrastructure under a massive denial-of-service (DoS) attack

    2. Help with failed and unavailable infrastructure

    3. Help with making a bill payment to AWS

    4. Help with accessing your infrastructure via the AWS CLI

  5. The primary purpose of an AWS technical account manager is to:

    1. Provide 24/7 customer service for your AWS account

    2. Provide deployment guidance and advocacy for Enterprise Support customers

    3. Provide deployment guidance and advocacy for Business Support customers

    4. Provide strategic cost estimates for Enterprise Support customers

  6. Your Linux-based EC2 instance requires a patch to a Linux kernel module. The problem
    is that patching the module will, for some reason, break the connection between your
    instance and data in an S3 bucket. Your team doesn’t know if it’s possible to work
    around this problem. Which is the most cost-effective AWS plan through which support
    professionals will try to help you?

    1. Developer.

    2. Business.

    3. Enterprise.

    4. No plan covers this kind of support.

  7. Your company enrolled in the Developer Support plan and, through the course of one
    month, consumed $4,000 USD of AWS services. How much will the support plan cost the
    company for the month?

    1. $120

    2. $29

    3. $100

    4. $480
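
For context, the Developer plan is billed as the greater of a flat monthly minimum ($29) or 3 percent of that month's AWS charges (figures from AWS's published support pricing). A minimal sketch of the arithmetic, not an official calculator:

```shell
# Developer Support: the greater of $29 or 3% of monthly AWS usage.
usage=4000                     # the month's AWS charges, in USD
pct=$(( usage * 3 / 100 ))     # 3% of usage
if [ "$pct" -gt 29 ]; then
  cost=$pct
else
  cost=29                      # the flat minimum applies at low usage
fi
echo "$cost"                   # here: 3% of $4,000 = $120
```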

  8. Your company enrolled in the Business Support plan and, through the course of three
    months, consumed $33,000 of AWS services (the consumption was equally divided
    across the months). How much will the support plan cost the company for the full
    three months?

    1. $4,000

    2. $100

    3. $1,100

    4. $2,310

  9. Which of the following AWS support services does not offer free documentation of
    some sort?

    1. AWS Professional Services

    2. The Basic Support plan

    3. AWS Partner Network

    4. The Knowledge Center

  10. What is the key difference between the roles of AWS Professional Services and a technical
    account manager (TAM)?

    1. The Professional Services product helps AWS Partner Network cloud professionals
      work alongside your own team to help you administrate your cloud infrastructure. The
      TAM is a cloud professional employed by AWS to guide you through the planning and
      execution of your infrastructure.

    2. The TAM is a cloud professional employed by AWS to guide you through the planning
      and execution of your infrastructure. The Professional Services product provides cloud
      professionals to work alongside your own team to help you administrate your cloud
      infrastructure.

    3. The TAM is a member of your team designated as the point person for all AWS
      projects. The Professional Services product provides consultants to work alongside
      your own team to help you administrate your cloud infrastructure.

    4. The Professional Services product is a network appliance that AWS installs in your
      data center to test cloud-bound workloads for compliance with best practices. The
      TAM is a cloud professional employed by AWS to guide you through the planning and
      execution of your infrastructure.

  11. AWS documentation is available in a number of formats, including which of the following?
    (Select TWO.)

    1. Microsoft Word (DOC)

    2. Kindle

    3. HTML

    4. DocBook

  12. Which of the following documentation sites are most likely to contain code snippets for you
    to cut and (after making sure you understand exactly what they’ll do) paste into your AWS
    operations? (Select TWO.)

    1. https://aws.amazon.com/premiumsupport/knowledge-center

    2. https://aws.amazon.com/premiumsupport/compare-plans

    3. https://docs.aws.amazon.com

    4. https://aws.amazon.com/professional-services

  13. What is the primary function of the content linked from the Knowledge Center?

    1. To introduce new users to the functionality of the core AWS services

    2. To explain how AWS deployments can be more efficient and secure than on-premises

    3. To provide a public forum where AWS users can ask their technical questions

    4. To present solutions to commonly encountered technical problems using AWS
      infrastructure

  14. On which of the following sites are you most likely to find information about encrypting
    your AWS resources?

    1. https://aws.amazon.com/premiumsupport/knowledge-center

    2. https://aws.amazon.com/security/security-resources

    3. https://docs.aws.amazon.com

    4. https://aws.amazon.com/security/encryption

  15. When using AWS documentation pages, what is the best way to be sure the information
    you’re reading is up-to-date?

    1. The page URL will include the word latest.

    2. The page URL will include the version number (i.e., 3.2).

    3. The page will have the word Current at the top right.

    4. There is no easy way to tell.

  16. Which of the following is not a Trusted Advisor category?

    1. Performance

    2. Service Limits

    3. Replication

    4. Fault Tolerance

  17. “Data volumes that aren’t properly backed up” is an example of which of these Trusted
    Advisor categories?

    1. Fault Tolerance

    2. Performance

    3. Security

    4. Cost Optimization

  18. Instances that are running (mostly) idle should be identified by which of these Trusted
    Advisor categories?

    1. Performance

    2. Cost Optimization

    3. Service Limits

    4. Replication

  19. Within the context of Trusted Advisor, what is a false positive?

    1. An alert for a service state that was actually intentional

    2. A green OK icon for a service state that is failed or failing

    3. A single status icon indicating that your account is completely compliant

    4. Textual indication of a failed state

  20. Which of the following Trusted Advisor alerts is available only for accounts on the Business
    or Enterprise Support plan? (Select TWO.)

    1. MFA on Root Account

    2. Load Balancer Optimization

    3. Service Limits

    4. IAM Access Key Rotation

Review Questions

  1. Which of the following designations would refer to the AWS US West (Oregon) region?

    1. us-east-1

    2. us-west-2

    3. us-west-2a

    4. us-west-2b

  2. Which of the following is an AWS Region for which customer access is restricted?

    1. AWS Admin

    2. US-DOD

    3. Asia Pacific (Tokyo)

    4. AWS GovCloud

  3. When you request a new virtual machine instance in EC2, your instance will automatically
    launch into the currently selected value of which of the following?

    1. Service

    2. Subnet

    3. Availability Zone

    4. Region

  4. Which of the following are not globally based AWS services? (Select TWO.)

    1. RDS

    2. Route 53

    3. EC2

    4. CloudFront

  5. Which of the following would be a valid endpoint your developers could use to access a
    particular Relational Database Service instance you’re running in the Northern Virginia
    region?

    1. us-east-1.amazonaws.com.rds

    2. ecs.eu-west-3.amazonaws.com

    3. rds.us-east-1.amazonaws.com

    4. rds.amazonaws.com.us-east-1

  6. What are the most significant architectural benefits of the way AWS designed its regions?
    (Select TWO.)

    1. It can make infrastructure more fault tolerant.

    2. It can make applications available to end users with lower latency.

    3. It can make applications more compliant with local regulations.

    4. It can bring down the price of running your deployments.

      64 Chapter 4 Understanding the AWS Environment


  7. Why is it that most AWS resources are tied to a single region?

    1. Because those resources are run on a physical device, and that device must live
      somewhere

    2. Because security considerations are best served by restricting access to a single physical
      location

    3. Because access to any one digital resource must always occur through a single physical
      gateway

    4. Because spreading them too far afield would introduce latency issues

  8. You want to improve the resilience of your EC2 web server. Which of the following is the
    most effective and efficient approach?

    1. Launch parallel, load-balanced instances in multiple AWS Regions.

    2. Launch parallel, load-balanced instances in multiple Availability Zones within a single
      AWS Region.

    3. Launch parallel, autoscaled instances in multiple AWS Regions.

    4. Launch parallel, autoscaled instances in multiple Availability Zones within a single
      AWS Region.

  9. Which of the following is the most accurate description of an AWS Availability Zone?

    1. One or more independently powered data centers running a wide range of hardware
      host types

    2. One or more independently powered data centers running a uniform hardware host
      type

    3. All the data centers located within a broad geographic area

    4. The infrastructure running within a single physical data center

  10. Which of the following most accurately describes a subnet within the AWS ecosystem?

    1. The virtual limits imposed on the network access permitted to a resource instance

    2. The block of IP addresses assigned for use within a single region

    3. The block of IP addresses assigned for use within a single Availability Zone

    4. The networking hardware used within a single Availability Zone

  11. What determines the order by which subnets/AZ options are displayed in EC2
    configuration dialogs?

    1. Alphabetical order

    2. They appear to be displayed in random order.

    3. Numerical order

    4. By order of capacity, with largest capacity first

  12. What is the primary goal of autoscaling?

    1. To ensure the long-term reliability of a particular physical resource

    2. To ensure the long-term reliability of a particular virtual resource

    3. To orchestrate the use of multiple parallel resources to direct incoming user requests

    4. To ensure that a predefined service level is maintained regardless of external demand
      or instance failures

  13. Which of the following design strategies is most effective for maintaining the reliability of a
    cloud application?

    1. Resource isolation

    2. Resource automation

    3. Resource redundancy

    4. Resource geolocation

  14. Which of the following AWS services are not likely to benefit from Amazon edge locations?
    (Select TWO.)

    1. RDS

    2. EC2 load balancers

    3. Elastic Block Store (EBS)

    4. CloudFront

  15. Which of the following is the primary benefit of using CloudFront distributions?

    1. Automated protection from mass email campaigns

    2. Greater availability through redundancy

    3. Greater security through data encryption

    4. Reduced latency access to your content no matter where your end users live

  16. What is the main purpose of Amazon Route 53?

    1. Countering the threat of distributed denial-of-service (DDoS) attacks

    2. Managing domain name registration and traffic routing

    3. Protecting web applications from web-based threats

    4. Using the serverless power of Lambda to customize CloudFront behavior

  17. According to the AWS Shared Responsibility Model, which of the following are
    responsibilities of AWS? (Select TWO.)

    1. The security of the cloud

    2. Patching underlying virtualization software running in AWS data centers

    3. Security of what’s in the cloud

    4. Patching OSs running on EC2 instances

  18. According to the AWS Shared Responsibility Model, what's the best way to define the
    status of the software driving an AWS managed service?

    1. Everything associated with an AWS managed service is the responsibility of AWS.

    2. Whatever is added by the customer (like application code) is the customer’s
      responsibility.

    3. Whatever the customer can control (application code and/or configuration settings) is
      the customer’s responsibility.

    4. Everything associated with an AWS managed service is the responsibility of the
      customer.

  19. Which of the following is one of the first places you should look when troubleshooting a
    failing application?

    1. AWS Acceptable Use Monitor

    2. Service Status Dashboard

    3. AWS Billing Dashboard

    4. Service Health Dashboard

  20. Where will you find information on the limits AWS imposes on the ways you can use your
    account resources?

    1. AWS User Agreement Policy

    2. AWS Acceptable Use Policy

    3. AWS Acceptable Use Monitor

    4. AWS Acceptable Use Dashboard

78 Chapter 5 Securing Your AWS Resources


Review Questions

  1. What is the primary function of the AWS IAM service?

    1. Identity and access management

    2. Access key management

    3. SSH key pair management

    4. Federated access management

  2. Which of the following are requirements you can include in an IAM password policy?
    (Select THREE.)

    1. Require at least one uppercase letter.

    2. Require at least one number.

    3. Require at least one space or null character.

    4. Require at least one nonalphanumeric character.

  3. Which of the following should you do to secure your AWS root user? (Select TWO.)

    1. Assign the root user to the “admins” IAM group.

    2. Use the root user for day-to-day administration tasks.

    3. Enable MFA.

    4. Create a strong password.

  4. How does multi-factor authentication work?

    1. Instead of an access password, users authenticate via a physical MFA device.

    2. In addition to an access password, users also authenticate via a physical MFA device.

    3. Users authenticate using tokens sent to at least two MFA devices.

    4. Users authenticate using a password and also either a physical or virtual MFA device.

  5. Which of the following SSH commands will successfully connect to an EC2 Amazon Linux
    instance with an IP address of 54.7.35.103 using a key named
    mykey.pem?

    1. echo "mykey.pem ubuntu@54.7.35.103" | ssh -i

    2. ssh -i mykey.pem ec2-user@54.7.35.103

    3. ssh -i mykey.pem@54.7.35.103

    4. ssh ec2-user@mykey.pem:54.7.35.103 -i
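
A note on the syntax being tested here: -i names the private key file, and the login name depends on the AMI (ec2-user for Amazon Linux, ubuntu for Ubuntu AMIs, which is what makes option 1's username a distractor). A sketch using the question's values:

```shell
# ssh refuses keys that are world-readable, so lock the file down first.
chmod 400 mykey.pem

# ssh -i <keyfile> <default-user>@<public-IP>
ssh -i mykey.pem ec2-user@54.7.35.103
```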

  6. What’s the most efficient method for managing permissions for multiple IAM users?

    1. Assign users requiring similar permissions to IAM roles.

    2. Assign users requiring similar permissions to IAM groups.

    3. Assign IAM users permissions common to others with similar administration
      responsibilities.

    4. Create roles based on IAM policies, and assign them to IAM users.

  7. What is an IAM role?

    1. A set of permissions allowing access to specified AWS resources

    2. A set of IAM users given permission to access specified AWS resources

    3. Permissions granted a trusted entity over specified AWS resources

    4. Permissions granted an IAM user over specified AWS resources

  8. How can federated identities be incorporated into AWS workflows? (Select TWO.)

    1. You can provide users authenticated through a third-party identity provider access to
      backend resources used by your mobile app.

    2. You can use identities to guide your infrastructure design decisions.

    3. You can use authenticated identities to import external data (like email records from
      Gmail) into AWS databases.

    4. You can provide admins authenticated through AWS Microsoft AD with access to a
      Microsoft SharePoint farm running on AWS.

  9. Which of the following are valid third-party federated identity standards? (Select TWO.)

    1. Secure Shell

    2. SSO

    3. SAML 2.0

    4. Active Directory

  10. What information does the IAM credential report provide?

    1. A record of API requests against your account resources

    2. A record of failed password account login attempts

    3. The current state of your account security settings

    4. The current state of security of your IAM users’ access credentials

  11. What text format does the credential report use?

    1. JSON

    2. CSV

    3. ASCII

    4. XML
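
The report can also be generated and downloaded from the CLI; it arrives base64-encoded and decodes to CSV. A sketch, assuming credentials with the relevant IAM read permissions are already configured:

```shell
# Ask IAM to build a fresh report, then download and decode it.
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode
```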

  12. Which of the following IAM policies is the best choice for the admin user you create in
    order to replace the root user for day-to-day administration tasks?

    1. AdministratorAccess

    2. AmazonS3FullAccess

    3. AmazonEC2FullAccess

    4. AdminAccess

  13. What will you need to provide for a new IAM user you’re creating who will use
    “programmatic access” to AWS resources?

    1. A password

    2. A password and MFA

    3. An access key ID

    4. An access key ID and secret access key

  14. What will IAM users with AWS Management Console access need to successfully log in?

    1. Their username, account_number, and a password

    2. Their username and password

    3. Their account number and secret access key

    4. Their username, password, and secret access key

  15. Which of the following will encrypt your data while in transit between your office and
    Amazon S3?

    1. DynamoDB

    2. SSE-S3

    3. A client-side master key

    4. SSE-KMS

  16. Which of the following AWS resources cannot be encrypted using KMS?

    1. Existing AWS Elastic Block Store volumes

    2. RDS databases

    3. S3 buckets

    4. DynamoDB databases

  17. What does KMS use to encrypt objects stored on your AWS account?

    1. SSH master key

    2. KMS master key

    3. Client-side master key

    4. Customer master key

  18. Which of the following standards governs AWS-based applications processing credit card
    transactions?

    1. SSE-KMS

    2. FedRAMP

    3. PCI DSS

    4. ARPA

  19. What is the purpose of the Service Organization Controls (SOC) reports found on AWS
    Artifact?

    1. They can be used to help you design secure and reliable credit card transaction
      applications.

    2. They attest to AWS infrastructure compliance with data accountability standards like
      Sarbanes–Oxley.

    3. They guarantee that all AWS-based applications are, by default, compliant with
      Sarbanes–Oxley standards.

    4. They’re an official, ongoing risk-assessment profiler for AWS-based deployments.

  20. What role can the documents provided by AWS Artifact play in your application planning?
    (Select TWO.)

    1. They can help you confirm that your deployment infrastructure is compliant with
      regulatory standards.

    2. They can provide insight into various regulatory and industry standards that represent
      best practices.

    3. They can provide insight into the networking and storage design patterns your AWS
      applications use.

    4. They represent AWS infrastructure design policy.

Review Questions

  1. Which of the following credentials can you use to log into the AWS Management Console?

    1. Access key ID

    2. Account alias

    3. Account ID

    4. Identity and Access Management (IAM) username

  2. How long will your session with the AWS Management Console remain active?

    1. 6 hours

    2. 12 hours

    3. 8 hours

    4. 24 hours

    5. 15 minutes

  3. While looking at the EC2 service console in the AWS Management Console while logged in
    as the root user, you notice all of your instances are missing. What could be the reason?

    1. You’ve selected the wrong region in the navigation bar.

    2. You don’t have view access.

    3. You’ve selected the wrong Availability Zone in the navigation bar.

    4. You don’t have an access key.

  4. Which of the following is true regarding a resource tag?

    1. It must be unique within an account.

    2. It’s case insensitive.

    3. It must have a key.

    4. It must have a value.
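
As a reference point: a tag must have a key, the value may be empty, and both are case sensitive. Tagging an instance from the CLI looks like this (the instance ID and tag shown are hypothetical):

```shell
# Key is required; Value may be an empty string.
aws ec2 create-tags \
  --resources i-1234567890abcdef0 \
  --tags Key=Department,Value=Finance
```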

  5. Which of the following is required to use the AWS Command Line Interface (CLI)?

    1. A secret key

    2. An IAM user

    3. Outbound network access to TCP port 80

    4. Linux

  6. Which of the following are options for installing the AWS CLI on Windows 10?
    (Select TWO.)

    1. The MSI installer

    2. An AWS software development kit (SDK)

    3. The Yum or Aptitude package manager

    4. Using Python and pip

      116 Chapter 6 Working with Your AWS Resources


  7. After installing the AWS Command Line Interface, what should you do before using it to
    securely manage your AWS resources?

    1. Issue the aws --version command.

    2. Issue the aws configure command.

    3. Reboot.

    4. Generate a new access key ID and secret access key for the root user.
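
The configuration step prompts for an access key pair plus two defaults and stores them under ~/.aws/. A sketch of the exchange (the key values shown are placeholders, not real credentials):

```shell
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE     (placeholder)
# AWS Secret Access Key [None]: wJalr...EXAMPLEKEY   (placeholder)
# Default region name [None]: us-east-1
# Default output format [None]: json
```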

  8. Which output format does the AWS CLI support?

    1. Tab-separated values (TSV)

    2. Comma-separated values (CSV)

    3. JavaScript object notation (JSON)

    4. None of these

  9. Which of the following programming languages are AWS software development kits
    available for? (Select THREE.)

    1. Fortran

    2. JavaScript

    3. JSON

    4. Java

    5. PHP

  10. Which of the following software development kits (SDKs) enable developers to write mobile
    applications that run on both Apple and Android devices? (Select TWO.)

    1. AWS Mobile SDK for Unity

    2. AWS Mobile SDK for .NET and Xamarin

    3. AWS SDK for Go

    4. AWS Mobile SDK for iOS

  11. Which of the following programming languages are AWS Internet of Things (IoT) device
    software development kits available for? (Select TWO.)

    1. JavaScript

    2. C++

    3. Swift

    4. Ruby

  12. What’s the difference between the AWS Command Line Interface (CLI) and the AWS
    software development kits (SDK)? (Select TWO.)

    1. The AWS SDKs allow you to use popular programming languages to write applications
      that interact with AWS services.

    2. The AWS CLI allows you to interact with AWS services from a terminal.

    3. The AWS SDKs allow you to interact with AWS services from a terminal.

    4. The AWS CLI allows you to use popular programming languages to write applications
      that interact with AWS services.
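
Both tools drive the same underlying APIs. For example, listing IAM users from a terminal versus from application code (the SDK equivalent is shown as a comment, using Python's boto3, to keep this a single shell snippet):

```shell
# CLI: interactive, from a terminal.
aws iam list-users

# SDK: the same API action invoked from application code, e.g. Python/boto3:
#   import boto3
#   print(boto3.client("iam").list_users())
```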

  13. Which of the following CloudWatch features store performance data from AWS services?

    1. Logs

    2. Metrics

    3. Events

    4. Metric filters

    5. Alarms

  14. For which of the following scenarios can you create a CloudWatch alarm to send a
    notification?

    1. A metric that doesn’t change for 24 hours

    2. Termination of an EC2 instance

    3. The presence of a specific IP address in a web server log

    4. A metric that exceeds a given threshold
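
A threshold-based alarm can be created from the CLI as well as the console. A sketch that notifies when average CPU stays above 90 percent for two five-minute periods (the instance ID and SNS topic ARN are hypothetical placeholders):

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 90 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```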

  15. Which of the following Simple Notification Service (SNS) protocols can you use to send a
    notification? (Select TWO.)

    1. Short Message Service (SMS) text message

    2. CloudWatch Events

    3. Simple Queue Service (SQS)

    4. Mobile pull notification

  16. Which of the following are true regarding CloudWatch Events? (Select TWO.)

    1. It can reboot an EC2 instance when an error appears in a log file.

    2. It can send an SNS notification when an EC2 instance’s CPU utilization exceeds 90%.

    3. It can send an SNS notification when an IAM user logs in to the AWS Management
      Console.

    4. It can shut down an EC2 instance at a specific time.

  17. Which of the following trigger an API action? (Select TWO.)

    1. Configuring the AWS Command Line Interface (CLI)

    2. Viewing an S3 bucket from the AWS Management Console

    3. Logging into the AWS Management Console

    4. Listing IAM users from the AWS Command Line Interface (CLI)

  18. What’s the most cost-effective way to view and search only the last 60 days of management
    API events on your AWS account?

    1. Use CloudTrail event history.

    2. Create a trail.

    3. Stream CloudTrail logs to CloudWatch.

    4. Use CloudWatch Events.
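
For reference, CloudTrail's built-in event history retains roughly the last 90 days of management events at no extra charge, which comfortably covers a 60-day window without creating a trail. It's queryable from the CLI, too:

```shell
# Browse recent management API events without configuring a trail.
aws cloudtrail lookup-events --max-results 5
```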

  19. You want to log every object downloaded from an S3 bucket in a specific region. You want
    to retain these logs indefinitely and search them easily. What’s the most cost-effective way
    to do this? (Select TWO.)

    1. Stream CloudTrail logs to CloudWatch Logs.

    2. Use CloudTrail event history.

    3. Enable CloudTrail logging of global service events.

    4. Create a trail to log S3 data events.

  20. What is a benefit of using CloudTrail log file integrity validation?

    1. It lets you assert that no CloudTrail log files have been deleted from CloudWatch.

    2. It lets you assert that no CloudTrail log files have been deleted from S3.

    3. It prevents unauthorized users from deleting CloudTrail log files.

    4. It tells you how a CloudTrail log file has been tampered with.

  21. Which of the following Cost Explorer report types can show you the monthly costs for your
    reserved EC2 instances?

    1. Reserved instance recommendations

    2. Reserved Instances (RI) Coverage reports

    3. Reserved Instances (RI) Utilization reports

    4. Costs and usage reports

  22. Which of the following services allow you to purchase reserved instances to save money?

    1. Amazon Relational Database Service (RDS)

    2. Lambda

    3. S3

    4. AWS Fargate

  23. Which Cost Explorer report shows the amount of money you’ve saved using reserved
    instances?

    1. Daily costs

    2. Reservation Utilization

    3. Reservation Coverage

    4. Monthly EC2 running hours costs and usage

  24. You’ve been running several Elasticsearch instances continuously for the past three months.
    You check the reserved instance recommendations in Cost Explorer but see no
    recommendations. What could be a reason for this?

    1. The recommendation parameters are based on the past seven days.

    2. You haven’t selected the Elastic Compute Cloud (EC2) service.

    3. Cost Explorer doesn’t make reservation recommendations for Elasticsearch.

    4. Your instances are already covered by reservations.

    5. You haven’t selected the ElastiCache service.

132 Chapter 7 The Core Compute Services


Review Questions

  1. What is the function of an EC2 AMI?

    1. To define the hardware profile used by an EC2 instance

    2. To serve as an instance storage volume for high-volume data processing operations

    3. To serve as a source image from which an instance’s primary storage volume is built

    4. To define the way data streams are managed by EC2 instances

  2. Where can you find a wide range of verified AMIs from both AWS and third-party vendors?

    1. AWS Marketplace

    2. Quick Start

    3. Community AMIs

    4. My AMIs

  3. Which of the following could be included in an EC2 AMI? (Select TWO.)

    1. A networking configuration

    2. A software application stack

    3. An operating system

    4. An instance type definition

  4. Which of the following are EC2 instance type families? (Select TWO.)

    1. c5d.18xlarge

    2. Compute optimized

    3. t2.micro

    4. Accelerated computing

  5. When describing EC2 instance types, what is the role played by the vCPU metric?

    1. vCPUs represent an instance’s potential resilience against external network demands.

    2. vCPUs represent an instance type’s system memory compared to the class of memory
      modules on a physical machine.

    3. vCPUs represent an AMI’s processing power compared to the number of processors on
      a physical machine.

    4. vCPUs represent an instance type’s compute power compared to the number of
      processors on a physical machine.

  6. Which of the following describes an EC2 dedicated instance?

    1. An EC2 instance running on a physical host reserved for the exclusive use of a single
      AWS account

    2. An EC2 instance running on a physical host reserved for and controlled by a single
      AWS account

    3. An EC2 AMI that can be launched only on an instance within a single AWS account

    4. An EC2 instance optimized for a particular compute role

  7. Which of the following describes an EBS volume?

    1. A software stack archive packaged to make it easy to copy and deploy to an EC2
      instance

    2. A virtualized partition of a physical storage drive that’s directly connected to the EC2
      instance it’s associated with

    3. A virtualized partition of a physical storage drive that’s not directly connected to the
      EC2 instance it’s associated with

    4. A storage volume that’s encrypted for greater security

  8. Why might you want to use an instance store volume with your EC2 instance rather than a
    volume from the more common EBS service? (Select TWO.)

    1. Instance store volumes can be encrypted.

    2. Data on instance store volumes will survive an instance shutdown.

    3. Instance store volumes provide faster data read/write performance.

    4. Instance store volumes are connected directly to your EC2 instance.

  9. Your web application experiences periodic spikes in demand that require the provisioning
    of extra instances. Which of the following pricing models would make the most sense for
    those extra instances?

    1. Spot

    2. On-demand

    3. Reserved

    4. Dedicated

  10. Your web application experiences periodic spikes in demand that require the provisioning
    of extra instances. Which of the following pricing models would make the most sense for
    the “base” instances that will run constantly?

    1. Spot

    2. On-demand

    3. Spot fleet

    4. Reserved

  11. Which of the following best describes what happens when you purchase an EC2 reserved
    instance?

    1. Charges for any instances you run matching the reserved instance type will be covered
      by the reservation.

    2. Capacity matching the reserved definition will be guaranteed to be available whenever
      you request it.

    3. Your account will immediately and automatically be billed for the full reservation
      amount.

    4. An EC2 instance matching your reservation will automatically be launched in the
      selected AWS Region.

  12. Which of the following use cases are good candidates for spot instances? (Select TWO.)

    1. Big data processing workloads

    2. Ecommerce websites

    3. Continuous integration development environments

    4. Long-term, highly available, content-rich websites

  13. Which AWS services simplify the process of bringing web applications to deployment?
    (Select TWO.)

    1. Elastic Block Store

    2. Elastic Compute Cloud

    3. Elastic Beanstalk

    4. Lightsail

  14. Which of the following services bills at a flat rate regardless of how it’s consumed?

    1. Lightsail

    2. Elastic Beanstalk

    3. Elastic Compute Cloud

    4. Relational Database Service

  15. Which of these stacks are available from Lightsail blueprints? (Select TWO.)

    1. Ubuntu

    2. GitLab

    3. WordPress

    4. LAMP

  16. Which of these AWS services use primarily EC2 resources under the hood? (Select TWO.)

    1. Elastic Block Store

    2. Lightsail

    3. Elastic Beanstalk

    4. Relational Database Service

  17. Which of the following AWS services are designed to let you deploy Docker containers?
    (Select TWO.)

    1. Elastic Container Service

    2. Lightsail

    3. Elastic Beanstalk

    4. Elastic Compute Cloud

  18. Which of the following use container technologies? (Select TWO.)

    1. Docker

    2. Kubernetes

    3. Lambda

    4. Lightsail

  19. What role can the Python programming language play in AWS Lambda?

    1. Python cannot be used for Lambda.

    2. It is the primary language for API calls to administrate Lambda remotely.

    3. It is used as the underlying code driving the service.

    4. It can be set as the runtime environment for a function.

  20. What is the maximum time a Lambda function may run before timing out?

    1. 15 minutes

    2. 5 minutes

    3. 1 minute

    4. 1 hour

Chapter 8: The Core Storage Services


Review Questions

  1. When trying to create an S3 bucket named documents, AWS informs you that the bucket
    name is already in use. What should you do in order to create a bucket?

    1. Use a different region.

    2. Use a globally unique bucket name.

    3. Use a different storage class.

    4. Use a longer name.

    5. Use a shorter name.
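
Because S3 bucket names are globally unique across every AWS account, a generic name like `documents` is almost certainly taken. A common workaround is to append a random suffix; the helper below is a hypothetical sketch of the idea:

```python
import uuid

# S3 bucket names are globally unique, so a random suffix makes a
# collision with another account's bucket vanishingly unlikely.
def unique_bucket_name(prefix: str) -> str:
    suffix = uuid.uuid4().hex[:12]          # 12 random hex characters
    return f"{prefix}-{suffix}".lower()     # bucket names must be lowercase

name = unique_bucket_name("documents")
print(name)  # e.g. documents-3f9c1a2b7e4d
```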

  2. Which S3 storage classes are most cost-effective for infrequently accessed data that can’t be
    easily replaced? (Select TWO.)

    1. STANDARD_IA

    2. ONEZONE_IA

    3. GLACIER

    4. STANDARD

    5. INTELLIGENT_TIERING

  3. What are the major differences between Simple Storage Service (S3) and Elastic Block Store
    (EBS)? (Select TWO.)

    1. EBS stores volumes.

    2. EBS stores snapshots.

    3. S3 stores volumes.

    4. S3 stores objects.

    5. EBS stores objects.

  4. Which tasks can S3 object life cycle configurations perform automatically? (Select THREE.)

    1. Deleting old object versions

    2. Moving objects to Glacier

    3. Deleting old buckets

    4. Deleting old objects

    5. Moving objects to an EBS volume

  5. What methods can be used to grant anonymous access to an object in S3? (Select TWO.)

    1. Bucket policies

    2. Access control lists

    3. User policies

    4. Security groups

  6. Your budget-conscious organization has a 5 TB database file it needs to retain off-site
    for at least 5 years. In the event the organization needs to access the database, it must be
    accessible within 8 hours. Which cloud storage option should you recommend, and why?
    (Select TWO.)

    1. S3 has the most durable storage.

    2. S3.

    3. S3 Glacier.

    4. Glacier is the most cost-effective.

    5. S3 has the fastest retrieval times.

    6. S3 doesn’t support object sizes greater than 4 TB.

  7. Which of the following actions can you perform from the S3 Glacier service console?

    1. Delete an archive

    2. Create a vault

    3. Create an archive

    4. Delete a bucket

    5. Retrieve an archive

  8. Which Glacier retrieval option generally takes 3 to 5 hours to complete?

    1. Provisioned

    2. Expedited

    3. Bulk

    4. Standard

  9. What’s the minimum size for a Glacier archive?

    1. 1 byte

    2. 40 TB

    3. 5 TB

    4. 0 bytes

  10. Which types of AWS Storage Gateway let you connect your servers to block storage using
    the iSCSI protocol? (Select TWO.)

    1. Cached gateway

    2. Tape gateway

    3. File gateway

    4. Volume gateway

  11. Where does AWS Storage Gateway primarily store data?

    1. Glacier vaults

    2. S3 buckets

    3. EBS volumes

    4. EBS snapshots

  12. You need an easy way to transfer files from a server in your data center to S3 without
    having to install any third-party software. Which of the following services and storage
    protocols could you use? (Select FOUR.)

    1. AWS Storage Gateway—file gateway

    2. iSCSI

    3. AWS Snowball

    4. SMB

    5. AWS Storage Gateway—volume gateway

    6. The AWS CLI

  13. Which of the following are true regarding the AWS Storage Gateway—volume gateway
    configuration? (Select THREE.)

    1. Stored volumes asynchronously back up data to S3 as EBS snapshots.

    2. Stored volumes can be up to 32 TB in size.

    3. Cached volumes locally store only a frequently used subset of data.

    4. Cached volumes asynchronously back up data to S3 as EBS snapshots.

    5. Cached volumes can be up to 32 TB in size.

  14. What’s the most data you can store on a single Snowball device?

    1. 42 TB

    2. 50 TB

    3. 72 TB

    4. 80 TB

  15. Which of the following are security features of AWS Snowball? (Select TWO.)

    1. It enforces encryption at rest.

    2. It uses a Trusted Platform Module (TPM) chip.

    3. It enforces NFS encryption.

    4. It has tamper-resistant network ports.

  16. Which of the following might AWS do after receiving a damaged Snowball device from a
    customer?

    1. Copy the customer’s data to Glacier

    2. Replace the Trusted Platform Module (TPM) chip

    3. Securely erase the customer’s data from the device

    4. Copy the customer’s data to S3

  17. Which of the following can you use to transfer data to AWS Snowball from a Windows
    machine without writing any code?

    1. NFS

    2. The Snowball Client

    3. iSCSI

    4. SMB

    5. The S3 SDK Adapter for Snowball

  18. How do the AWS Snowball and Snowball Edge devices differ? (Select TWO.)

    1. Snowball Edge supports copying files using NFS.

    2. Snowball devices can be clustered together for storage.

    3. Snowball’s QSFP+ network interface supports speeds up to 40 Gbps.

    4. Snowball Edge can run EC2 instances.

  19. Which of the following Snowball Edge device options is the best for running machine
    learning applications?

    1. Compute Optimized

    2. Compute Optimized with GPU

    3. Storage Optimized

    4. Network Optimized

  20. Which of the following hardware devices offers a network interface speed that supports up
    to 100 Gbps?

    1. Snowball Edge with the Storage Optimized configuration

    2. Snowball Edge with the Compute Optimized configuration

    3. Storage Gateway

    4. 80 TB Snowball

Chapter 9: The Core Database Services


Review Questions

  1. Which type of database stores data in columns and rows?

    1. Nonrelational

    2. Relational

    3. Key-value store

    4. Document

  2. Which of the following Structured Query Language (SQL) statements can you use to write
    data to a relational database table?

    1. CREATE

    2. INSERT

    3. QUERY

    4. WRITE
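
For reference, `INSERT` is the SQL statement that writes rows to a table; `CREATE` only defines the table's structure. A quick illustration using Python's built-in sqlite3 module (the table and data are made up for the example):

```python
import sqlite3

# CREATE defines the table; INSERT writes a row; SELECT reads it back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)", (1, "Alice"))
row = conn.execute("SELECT name FROM customers WHERE id = 1").fetchone()
print(row[0])  # Alice
conn.close()
```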

  3. Which of the following statements is true regarding nonrelational databases?

    1. You can create only one table.

    2. No primary key is required.

    3. You can’t store data with a fixed structure.

    4. You don’t have to define all the types of data that a table can store before adding data
      to it.

  4. What is a no-SQL database?

    1. A nonrelational database without primary keys

    2. A schemaless relational database

    3. A schemaless nonrelational database

    4. A relational database with primary keys

  5. What do new Relational Database Service (RDS) instances use for database storage?

    1. Instance volumes

    2. Elastic Block Store (EBS) volumes

    3. Snapshots

    4. Magnetic storage

  6. Which of the following are database engine options for Amazon Relational Database
    Service (RDS)? (Select TWO.)

    1. IBM dBase

    2. PostgreSQL

    3. DynamoDB

    4. Amazon Aurora

    5. Redis

  7. What two databases is Amazon Aurora compatible with? (Select TWO.)

    1. MySQL

    2. PostgreSQL

    3. MariaDB

    4. Oracle

    5. Microsoft SQL Server

  8. Which of the following features of Relational Database Service (RDS) can prevent data loss
    in the event of an Availability Zone failure? (Select TWO.)

    1. Read replicas

    2. Multi-AZ

    3. Snapshots

    4. IOPS

    5. Vertical scaling

  9. Which RDS database engine offers automatically expanding database storage up to 64 TB?

    1. Microsoft SQL Server

    2. Amazon Aurora

    3. Oracle

    4. Amazon Athena

  10. Which of the following Relational Database Service (RDS) features can help you achieve a
    monthly availability of 99.95 percent?

    1. Multi-AZ

    2. Read replicas

    3. Point-in-time recovery

    4. Horizontal scaling

  11. What is true regarding a DynamoDB partition? (Select TWO.)

    1. It’s stored within a table.

    2. It’s backed by solid-state drives.

    3. It’s a way to uniquely identify an item in a table.

    4. It’s replicated across multiple Availability Zones.

  12. What is the minimum monthly availability for DynamoDB in a single region?

    1. 99.99 percent

    2. 99.95 percent

    3. 99.9 percent

    4. 99.0 percent

  13. Which of the following statements is true regarding a DynamoDB table?

    1. It can store only one data type.

    2. When you create a table, you must define the maximum number of items that it can
      store.

    3. Items in a table can have duplicate values for the primary key.

    4. Items in a table don’t have to have all the same attributes.

  14. Which configuration parameters can you adjust to improve write performance against a
    DynamoDB table? (Select TWO.)

    1. Decrease read capacity units (RCU)

    2. Increase read capacity units

    3. Increase write capacity units (WCU)

    4. Decrease write capacity units

    5. Enable DynamoDB Auto Scaling

  15. Which DynamoDB operation is the most read-intensive?

    1. Write

    2. Query

    3. Scan

    4. Update

  16. Which of the following would be appropriate to use for a primary key in a DynamoDB
    table that stores a customer list?

    1. The customer’s full name

    2. The customer’s phone number

    3. The customer’s city

    4. A randomly generated customer ID number

  17. Which type of Redshift node uses magnetic storage?

    1. Cost-optimized

    2. Dense compute

    3. Dense storage

    4. Dense memory

  18. Which Redshift feature can analyze structured data stored in S3?

    1. Redshift Spectrum

    2. Redshift S3

    3. Amazon Athena

    4. Amazon RDS

  19. What is the term for a relational database that stores large amounts of structured data from
    a variety of sources for reporting and analysis?

    1. Data storehouse

    2. Data warehouse

    3. Report cluster

    4. Dense storage node

  20. What’s the maximum amount of data you can store in a Redshift cluster when using dense
    storage nodes?

    1. 2 PB

    2. 326 TB

    3. 2 TB

    4. 326 PB

    5. 236 TB

Chapter 10: The Core Networking Services


Review Questions

  1. Which of the following are true of a default VPC? (Select TWO.)

    1. A default VPC spans multiple regions.

    2. AWS creates a default VPC in each region.

    3. AWS creates a default VPC in each Availability Zone.

    4. By default, each default VPC is available to one AWS account.

  2. Which of the following is a valid CIDR for a VPC or subnet?

    1. 10.0.0.0/28

    2. 10.0.0.0/29

    3. 10.0.0.0/8

    4. 10.0.0.0/15
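
As background for this question: AWS accepts VPC and subnet CIDR blocks only with netmasks between /16 and /28 inclusive, which is why just one of the options is valid. A minimal sketch of that size check using Python's standard ipaddress module:

```python
import ipaddress

# AWS VPCs and subnets must use a netmask between /16 and /28 (inclusive).
def is_valid_vpc_cidr(cidr: str) -> bool:
    network = ipaddress.ip_network(cidr)  # raises ValueError if malformed
    return 16 <= network.prefixlen <= 28

for candidate in ["10.0.0.0/28", "10.0.0.0/29", "10.0.0.0/8", "10.0.0.0/15"]:
    print(candidate, is_valid_vpc_cidr(candidate))
```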

  3. Which of the following are true regarding subnets? (Select TWO.)

    1. A VPC must have at least two subnets.

    2. A subnet must have a CIDR that’s a subset of the CIDR of the VPC in which it resides.

    3. A subnet spans one Availability Zone.

    4. A subnet spans multiple Availability Zones.

  4. Which of the following is true of a new security group?

    1. It contains an inbound rule denying access from public IP addresses.

    2. It contains an outbound rule denying access to public IP addresses.

    3. It contains an outbound rule allowing access to any IP address.

    4. It contains an inbound rule allowing access from any IP address.

    5. It contains an inbound rule denying access from any IP address.

  5. What’s the difference between a security group and a network access control list (NACL)?
    (Select TWO.)

    1. A network access control list operates at the instance level.

    2. A security group operates at the instance level.

    3. A security group operates at the subnet level.

    4. A network access control list operates at the subnet level.

  6. Which of the following is true of a VPC peering connection?

    1. It’s a private connection that connects more than three VPCs.

    2. It’s a private connection between two VPCs.

    3. It’s a public connection between two VPCs.

    4. It’s a virtual private network (VPN) connection between two VPCs.

  7. What are two differences between a virtual private network (VPN) connection and a Direct
    Connect connection? (Select TWO.)

    1. A Direct Connect connection offers predictable latency because it doesn’t traverse the
      internet.

    2. A VPN connection uses the internet for transport.

    3. A Direct Connect connection uses AES 128- or 256-bit encryption.

    4. A VPN connection requires proprietary hardware.

  8. Which of the following are true about registering a domain name with Route 53? (Select
    TWO.)

    1. The registrar you use to register a domain name determines who will host DNS for
      that domain.

    2. You can register a domain name for a term of up to 10 years.

    3. Route 53 creates a private hosted zone for the domain.

    4. Route 53 creates a public hosted zone for the domain.

  9. Which of the following Route 53 routing policies can return a set of randomly ordered
    values?

    1. Simple

    2. Multivalue Answer

    3. Failover

    4. Latency

  10. Which of the following Route 53 routing policies doesn’t use health checks?

    1. Latency

    2. Multivalue Answer

    3. Simple

    4. Geolocation

  11. Which of the following types of Route 53 health checks works by making a test connection
    to a TCP port?

    1. Simple

    2. CloudWatch alarm

    3. Endpoint

    4. Calculated

  12. You have two EC2 instances hosting a web application. You want to distribute 20 percent
    of traffic to one instance and 80 percent to the other. Which of the following Route 53
    routing policies should you use?

    1. Weighted

    2. Failover

    3. Multivalue Answer

    4. Simple
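
The 20/80 split described in this question behaves like a weighted random choice. A small simulation (the record names are hypothetical) of how weighted routing distributes requests in proportion to each record's weight:

```python
import random

# Two hypothetical records: one weighted 20, the other weighted 80.
random.seed(42)  # fixed seed so the simulation is repeatable
records = ["instance-a", "instance-b"]
weights = [20, 80]

picks = random.choices(records, weights=weights, k=10_000)
share_b = picks.count("instance-b") / len(picks)
print(round(share_b, 2))  # close to 0.80
```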

  13. Resources in a VPC need to be able to resolve internal IP addresses for other resources in
    the VPC. No one outside of the VPC should be able to resolve these addresses. Which of the
    following Route 53 resources can help you achieve this?

    1. A public hosted zone

    2. A private hosted zone

    3. Domain name registration

    4. Health checks

  14. You want to provide private name resolution for two VPCs using the domain name
    company.pri. How many private hosted zones do you need to create?

    1. 1

    2. 2

    3. 3

    4. 4

  15. On how many continents are CloudFront edge locations distributed?

    1. 7

    2. 6

    3. 5

    4. 4

  16. From where does CloudFront retrieve content to store for caching?

    1. Regions

    2. Origins

    3. Distributions

    4. Edge locations

  17. Which CloudFront distribution type requires you to provide a media player?

    1. Streaming

    2. RTMP

    3. Web

    4. Edge

  18. You need to deliver content to users in the United States and Canada. Which of the
    following edge location options will be the most cost-effective for your CloudFront
    distribution?

    1. United States, Canada, and Europe

    2. United States, Canada, Europe, and Asia

    3. United States, Canada, Europe, Asia, and Africa

    4. All edge locations

  19. Approximately how many different CloudFront edge locations are there?

    1. About 50

    2. More than 150

    3. More than 300

    4. More than 500

  20. Which of the following are valid origins for a CloudFront distribution? (Select TWO.)

    1. EC2 instance

    2. A public S3 bucket

    3. A private S3 bucket that you don’t have access to

    4. A private S3 bucket that you own

Chapter 11: Automating Your AWS Workloads


Review Questions

  1. Which of the following is an advantage of using CloudFormation?

    1. It uses the popular Python programming language.

    2. It prevents unauthorized manual changes to resources.

    3. It lets you create multiple separate AWS environments using a single template.

    4. It can create resources outside of AWS.

  2. What formats do CloudFormation templates support? (Select TWO.)

    1. XML

    2. YAML

    3. HTML

    4. JSON

  3. What’s an advantage of using parameters in a CloudFormation template?

    1. Allow customizing a stack without changing the template.

    2. Prevent unauthorized users from using a template.

    3. Prevent stack updates.

    4. Allow multiple stacks to be created from the same template.
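
As a sketch of what such a parameter looks like, here is a minimal, hypothetical template built as a Python dict: the `InstanceType` parameter lets each stack choose its own value without editing the template itself.

```python
import json

# Minimal, made-up CloudFormation template with one parameter.
# Each stack created from it can override InstanceType at launch time.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t2.micro",
            "AllowedValues": ["t2.micro", "t2.small"],
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            # Ref substitutes the per-stack parameter value here.
            "Properties": {"InstanceType": {"Ref": "InstanceType"}},
        }
    },
}
print(json.dumps(template, indent=2))
```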

  4. Why would you use CloudFormation to automatically create resources for a development
    environment instead of creating them using AWS CLI commands? (Select TWO.)

    1. Resources CloudFormation creates are organized into stacks and can be managed as a
      single unit.

    2. CloudFormation stack updates help ensure that changes to one resource won’t break
      another.

    3. Resources created by CloudFormation always work as expected.

    4. CloudFormation can provision resources faster than the AWS CLI.

  5. What are two features of CodeCommit? (Select TWO.)

    1. Versioning

    2. Automatic deployment

    3. Differencing

    4. Manual deployment

  6. In the context of CodeCommit, what can differencing accomplish?

    1. Allowing reverting to an older version of a file

    2. Understanding what code change introduced a bug

    3. Deleting duplicate lines of code

    4. Seeing when an application was last deployed

  7. What software development practice regularly tests new code for bugs but doesn’t do
    anything else?

    1. Differencing

    2. Continuous deployment

    3. Continuous delivery

    4. Continuous integration

  8. Which CodeBuild build environment compute types support Windows operating systems?
    (Select TWO.)

    1. build.general2.large

    2. build.general1.medium

    3. build.general1.small

    4. build.general1.large

    5. build.windows1.small

  9. What does a CodeBuild environment always contain? (Select TWO.)

    1. An operating system

    2. A Docker image

    3. The Python programming language

    4. .NET Core

    5. The PHP programming language

  10. Which of the following can CodeDeploy do? (Select THREE.)

    1. Deploy an application to an on-premises Windows instance.

    2. Deploy a Docker container to the Elastic Container Service.

    3. Upgrade an application on an EC2 instance running Red Hat Enterprise Linux.

    4. Deploy an application to an Android smartphone.

    5. Deploy a website to an S3 bucket.

  11. What is the minimum number of actions in a CodePipeline pipeline?

    1. 1

    2. 2

    3. 3

    4. 4

    5. 0

  12. You want to predefine the configuration of EC2 instances that you plan to launch manually
    and using Auto Scaling. What resource must you use?

    1. CloudFormation template

    2. Instance role

    3. Launch configuration

    4. Launch template

  13. What Auto Scaling group parameters set the limit for the number of instances that Auto
    Scaling creates? (Select TWO.)

    1. Maximum

    2. Group size

    3. Desired capacity

    4. Minimum

  14. An Auto Scaling group has a desired capacity of 7 and a maximum size of 7. What will
    Auto Scaling do if someone manually terminates one of these instances?

    1. It will not launch any new instances.

    2. It will launch one new instance.

    3. It will terminate one instance.

    4. It will change the desired capacity to 6.

  15. What Auto Scaling feature creates a scaling schedule based on past usage patterns?

    1. Predictive scaling

    2. Scheduled scaling

    3. Dynamic scaling

    4. Pattern scaling

  16. What type of AWS Systems Manager document can run Bash or PowerShell scripts on an
    EC2 instance?

    1. Run document

    2. Command document

    3. Automation document

    4. Script document

  17. What type of AWS Systems Manager document can take a snapshot of an EC2 instance?

    1. Command document

    2. Run document

    3. Script document

    4. Automation document

  18. Which of the following OpsWorks services uses Chef recipes?

    1. AWS OpsWorks for Puppet Enterprise

    2. AWS OpsWorks Stacks

    3. AWS OpsWorks Layers

    4. AWS OpsWorks for Automation

  19. What configuration management platforms does OpsWorks support? (Select TWO.)

    1. SaltStack

    2. Puppet Enterprise

    3. CFEngine

    4. Chef

    5. Ansible

  20. Which of the following OpsWorks Stacks layers contains at least one EC2 instance?

    1. EC2 Auto Scaling layer

    2. Elastic Container Service (ECS) cluster layer

    3. OpsWorks layer

    4. Relational Database Service (RDS) layer

    5. Elastic Load Balancing (ELB) layer

Chapter 12: Common Use-Case Scenarios


Review Questions

  1. Which of the following is not one of the pillars of the Well-Architected Framework?

    1. Performance efficiency

    2. Reliability

    3. Resiliency

    4. Security

    5. Cost optimization

  2. Which of the following are examples of applying the principles of the security pillar of the
    Well-Architected Framework? (Select TWO.)

    1. Granting each AWS user their own IAM username and password

    2. Creating a security group rule to deny access to unused ports

    3. Deleting an empty S3 bucket

    4. Enabling S3 versioning

  3. You’re hosting a web application on two EC2 instances in an Auto Scaling group. The
    performance of the application is consistently acceptable. Which of the following can help
    maintain or improve performance efficiency? (Select TWO.)

    1. Monitoring for unauthorized access

    2. Doubling the number of instances in the Auto Scaling group

    3. Implementing policies to prevent the accidental termination of EC2 instances in the
      same Auto Scaling group

    4. Using CloudFront

  4. Which of the following can help achieve cost optimization? (Select TWO.)

    1. Deleting unused S3 objects

    2. Deleting empty S3 buckets

    3. Deleting unused application load balancers

    4. Deleting unused VPCs

  5. Which of the following is a key component of operational excellence?

    1. Adding more security personnel

    2. Automating manual processes

    3. Making minor improvements to bad processes

    4. Making people work longer hours

  6. Your default VPC in the us-west-1 Region has three default subnets. How many Availability
    Zones are in this Region?

    1. 2

    2. 3

    3. 4

    4. 5

  7. Your organization is building a database-backed web application that will sit behind an
    application load balancer. You add an inbound security group rule to allow HTTP traffic
    on TCP port 80. Where should you apply this security group to allow users to access the
    application?

    1. The application load balancer listener

    2. The database instance

    3. The subnets where the instances reside

    4. None of these

  8. How does an application load balancer enable reliability?

    1. By routing traffic away from failed instances

    2. By replacing failed instances

    3. By routing traffic to the least busy instances

    4. By caching frequently accessed content

  9. Which of the following contains the configuration information for instances in an Auto
    Scaling group?

    1. Launch directive

    2. Dynamic scaling policy

    3. CloudFormation template

    4. Launch template

  10. You’ve created a target tracking policy for an Auto Scaling group. You want to ensure that
    the number of instances in the group never exceeds 5. How can you accomplish this?

    1. Set the group size to 5.

    2. Set the maximum group size to 5.

    3. Set the minimum group size to 5.

    4. Delete the target tracking policy.

  11. Which of the following is an example of a static website?

    1. A WordPress blog

    2. A website hosted on S3

    3. A popular social media website

    4. A web-based email application

  12. Which of the following features of S3 improve the security of data you store in an S3
    bucket? (Select TWO.)

    1. Objects in S3 are not public by default.

    2. All objects are readable by all AWS users by default.

    3. By default, S3 removes ACLs that allow public read access to objects.

    4. S3 removes public objects by default.

  13. Which of the following is required to enable S3 static website hosting on a bucket?

    1. Enable bucket hosting in the S3 service console.

    2. Disable default encryption.

    3. Disable object versioning.

    4. Enable object versioning.

    5. Make all objects in the bucket public.

  14. You’ve created a static website hosted on S3 and given potential customers the URL that
    consists of words and numbers. They’re complaining that it’s too hard to type in. How can
    you come up with a friendlier URL?

    1. Re-create the bucket using only words in the name.

    2. Use a custom domain name.

    3. Re-create the bucket in a different Region.

    4. Re-create the bucket using only numbers in the name.

  15. Which of the following is true regarding static websites hosted in S3?

    1. The content served is not encrypted in transit.

    2. Anyone can modify the content.

    3. You must use a custom domain name.

    4. A website hosted on S3 is stored in multiple Regions.

  16. Which of the following can impact the reliability of a web application running on EC2
    instances?

    1. Taking EBS snapshots of the instances.

    2. The user interface is too difficult to use.

    3. Not replacing a misconfigured resource that the application depends on.

    4. Provisioning too many instances.

  17. You have a public web application running on EC2 instances. Which of the following
    factors affecting the performance of your application might be out of your control?

    1. Storage

    2. Compute

    3. Network

    4. Database

  18. An Auto Scaling group can use an EC2 system health check to determine whether an
    instance is healthy. What other type of health check can it use?

    1. S3

    2. SNS

    3. VPC

    4. ELB

  19. You’re hosting a static website on S3. Your web assets are stored under the Standard storage
    class. Which of the following is true regarding your site?

    1. Someone may modify the content of your site without authorization.

    2. You’re responsible for S3 charges.

    3. You’re charged for any compute power used to host the site.

    4. An Availability Zone outage may bring down the site.

  20. You’re hosting a static website on S3. Your web assets are stored in the US East 1 Region
    in the bucket named mygreatwebsite. What is the URL of the website?

    1. http://mygreatwebsite.s3-website-us-east-1.amazonaws.com

    2. http://mygreatwebsite.s3.amazonaws.com

    3. http://mygreatwebsite.s3-website-us-east.amazonaws.com

    4. http://mygreatwebsite.s3-us-east-1.amazonaws.com


Appendix A
Answers to Review Questions


Chapter 1: The Cloud

  1. C. Having globally distributed infrastructure and experienced security engineers makes a
    provider’s infrastructure more reliable. Metered pricing makes a wider range of workloads
    possible.

  2. A, D. Security and virtualization are both important characteristics of successful cloud
    workloads, but neither will directly impact availability.

  3. B, D. Security and scalability are important cloud elements but are not related to metered
    pricing.

  4. A, B. Security and elasticity are important but are not directly related to server
    virtualization.

  5. D. A hypervisor is software (not hardware) that administrates virtualized operations.

  6. B. Sharding, aggregating remote resources, and abstracting complex infrastructure
    can all be accomplished using virtualization techniques, but they aren’t, of themselves,
    virtualization.

  7. C. PaaS products mask complexity, SaaS products provide end-user services, and serverless
    architectures (like AWS Lambda) let developers run code on cloud servers.

  8. A. IaaS products provide full infrastructure access, SaaS products provide end-user
    services, and serverless architectures (like AWS Lambda) let developers run code on cloud
    servers.

  9. B. IaaS products provide full infrastructure access, PaaS products mask complexity, and
    serverless architectures (like AWS Lambda) let developers run code on cloud servers.

  10. A. Increasing or decreasing compute resources better describes elasticity. Efficient use of
    virtualized resources and billing models aren’t related directly to scalability.

  11. C. Preconfiguring compute instances before they’re used to scale up an application is an
    element of scalability rather than elasticity. Efficient use of virtualized resources and billing
    models aren’t related directly to elasticity.

  12. A, D. Capitalized assets and geographic reach are important but don’t have a direct impact
    on operational scalability.


Chapter 2: Understanding Your
AWS Account

  1. D. Only the t2.micro instance type is Free Tier–eligible, and any combination of t2.micro
    instances can be run up to a total of 750 hours per month.


  2. B, C. S3 buckets—while available in such volumes under the Free Tier—are not necessary
    for an EC2 instance. Since the maximum total EBS space allowed by the Free Tier is 30 GB,
    two 20 GB volumes (40 GB in total) would not be covered.
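The arithmetic behind answer 2 can be checked directly; a quick sketch:

```python
# Free Tier EBS allowance check. The 30 GB cap is from the answer above;
# the two 20 GB volumes are the scenario from the question.
FREE_TIER_EBS_GB = 30

volumes_gb = [20, 20]
total = sum(volumes_gb)
print(total, total > FREE_TIER_EBS_GB)  # 40 True -> exceeds the allowance
```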

  3. B, D. The API calls/month and ECR free storage are available only under the Free Tier.

  4. A, B. There is no Top Free Tier Services Dashboard or, for that matter, a Billing Preferences
    Dashboard.

  5. C. Wikipedia pages aren’t updated or detailed enough to be helpful in this respect. The
    AWS CLI isn’t likely to have much (if any) pricing information. The TCO Calculator
    shouldn’t be used for specific and up-to-date information about service pricing.

  6. A. Pricing will normally change based on the volume of service units you consume and,
    often, between AWS Regions.

  7. B. You can, in fact, calculate costs for a multiservice stack. The calculator pricing is kept
    up-to-date. You can specify very detailed configuration parameters.

  8. C, D. Calculate By Month Or Year is not an option, and since the calculator calculates
    only cost by usage, Include Multiple Organizations wouldn’t be a useful option.

  9. A. The calculator covers all significant costs associated with an on-premises deployment
    but doesn’t include local or national tax implications.

  10. D. The currency you choose to use will have little impact on price—it’s all relative, of
    course. The guest OS and region will make a difference, but it’s relatively minor.

  11. B. The correct URL is
    https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html.

  12. A. Resource limits exist only within individual regions; the limits in one region don’t
    impact another. There’s no logistical reason that customers can’t scale up deployments
    at any rate. There are, in fact, no logical limits to the ability of AWS resources to scale
    upward.

  13. D. While most service limits are soft and can be raised on request, there are some service
    limits that are absolute.

  14. D. The Cost Explorer and Cost and Usage Reports pages provide more in-depth and/or
    customized details. Budgets allow you to set alerts based on usage.

  15. C. Reservation budgets track the status of any active reserved instances on your account.
    Cost budgets monitor costs being incurred against your account. There is no budget type
    that correlates usage per unit cost to understand your account cost efficiency.

  16. D. You can configure the period, instance type, and start/stop dates for a budget, but you
    can’t filter by resource owner.

  17. A. Billing events aren’t triggers for alerts. Nothing in this chapter discusses intrusion events.

  18. C. Tags are passive, so they can’t automatically trigger anything. Resource tags—not cost
    allocation tags—are meant to help you understand and control deployments. Tags aren’t
    associated with particular billing periods.


  19. A, C. Companies with multiple users of resources in a single AWS account would not
    benefit from AWS Organizations, nor would a company with completely separated units.
    The value of AWS Organizations is in integrating the administration of related accounts.

  20. B. Budgets are used to set alerts. Reports provide CSV-formatted data for offline
    processing. Consolidated Billing (now migrated to AWS Organizations) is for
    administrating resources across multiple AWS accounts.


Chapter 3: Getting Support on AWS

  1. C. The Basic plan won’t provide any personalized support. The Developer plan is cheaper,
    but there is limited access to support professionals. The Business plan does offer 24/7 email,
    chat, and phone access to an engineer, so until you actually deploy, this will make the most
    sense. At a $15,000 monthly minimum, the Enterprise plan won’t be cost effective.

  2. B. Using the public documentation available through the Basic plan won’t be enough to
    address your specific needs. The Business and Enterprise plans are not necessary as you
    don’t yet have production deployments.

  3. D. The lower three support tiers provide limited access to only lower-level support
    professionals, while the Enterprise plan provides full access to senior engineers and
    dedicates a technical account manager (TAM) as your resource for all your AWS needs.

  4. C. Basic plan customers are given customer support access only for account management
    issues and not for technical support or security breaches.

  5. B. The TAM is available only for Enterprise Support customers. The primary function is
    one of guidance and advocacy.

  6. B. Only the Business and Enterprise plans include help with troubleshooting
    interoperability between AWS resources and third-party software and operating systems.
    The Business plan is the least expensive that will get you this level of support.

  7. A. The Developer plan costs the greater of $29 or 3 percent of the monthly usage. In this
    case, 3 percent of the month’s usage is $120.

  8. D. The Business plan—when monthly consumption falls between $10,000 and $80,000—
    costs the greater of $100 or 7 percent of the monthly usage. In this case, 7 percent of a
    single month’s usage ($11,000) is $770. The three month total would, therefore, be $2,310.
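The tier math in answers 7 and 8 can be verified with a short calculation. A sketch: the percentages and floors are as stated in the answers, and the $4,000 monthly usage for answer 7 is implied by its $120 result.

```python
def developer_cost(usage):
    # Developer plan: the greater of $29 or 3 percent of monthly usage
    return max(29, usage * 3 / 100)

def business_cost(usage):
    # Business plan ($10,000-$80,000 band): the greater of $100 or 7 percent
    return max(100, usage * 7 / 100)

print(developer_cost(4000))      # 120.0 (answer 7)
print(3 * business_cost(11000))  # 2310.0 over three months (answer 8)
```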

  9. C. The AWS Professional Services site includes tech talk webinars, white papers, and blog
    posts. The Basic Support plan includes AWS documentation resources. The Knowledge
    Center consists of FAQ documentation.

  10. A. The TAM is an AWS employee dedicated to guiding your developer and admin teams.
    There is no such thing as a network appliance for workload testing.

  11. B, C. Although DOC and DocBook are both popular and useful formats, neither is used by
    AWS for its documentation.


  12. A, C. The compare-plans page provides general information about support plans, and the
    professional-services site describes accessing that particular resource. Neither directly
    includes technical guides.

  13. D. The Knowledge Center is a FAQ for technical problems and their solutions. The main
    documentation site is much better suited to introduction-level guides. The
    https://forums.aws.amazon.com site is the discussion forum for AWS users.

  14. B. The Knowledge Center is a general FAQ for technical problems and their solutions.
    The docs.aws.amazon.com site is for general documentation. There is no
    https://aws.amazon.com/security/encryption page.

  15. A. Version numbers are not publicly available, and the word Current isn’t used in this context.

  16. C. Replication is, effectively, a subset of Fault Tolerance and therefore would not require its
    own category.

  17. A. Performance identifies configuration settings that might be blocking performance
    improvements. Security identifies any failures to use security best-practice configurations.
    Cost Optimization identifies any resources that are running and unnecessarily costing
    you money.

  18. B. Performance identifies configuration settings that might be blocking performance
    improvements. Service Limits identifies resource usage that’s approaching AWS Region or
    service limits. There is no Replication category.

  19. A. An OK status for a failed state is a false negative. There is no single status icon
    indicating that your account is completely compliant in Trusted Advisor.

  20. B, D. Both the MFA and Service Limits checks are available for all accounts.


Chapter 4: Understanding the
AWS Environment

  1. B. The letter (a, b) at the end of a designation indicates an Availability Zone. us-east-1
    would never be used for a Region in the western part of the United States.

  2. D. The AWS GovCloud Region is restricted to authorized customers only. Asia Pacific
    (Tokyo) is a normal Region. AWS Admin and US-DOD don’t exist (as far as we know, at
    any rate).

  3. D. EC2 instances will automatically launch into the Region you currently have selected.
    You can manually select the subnet that’s associated with a particular Availability Zone for
    your new EC2 instance, but there’s no default choice.

  4. B, D. Relational Database Service (RDS) and EC2 both use resources that can exist in only
    one Region. Route 53 and CloudFront are truly global services in that they’re not located in
    or restricted to any single AWS Region.


  5. C. The correct syntax for an endpoint is
    <service-designation>.<region-designation>.amazonaws.com—meaning, in this case,
    rds.us-east-1.amazonaws.com.
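The endpoint pattern from answer 5 amounts to a one-line template; a sketch, using the service and Region values from the answer:

```python
def endpoint(service, region):
    # <service-designation>.<region-designation>.amazonaws.com
    return f"{service}.{region}.amazonaws.com"

print(endpoint("rds", "us-east-1"))  # rds.us-east-1.amazonaws.com
```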

  6. B, C. For most uses, distributing your application infrastructure between multiple AZs
    within a single Region gives them sufficient fault tolerance. While AWS services do enjoy
    a significant economy of scale—bringing prices down—little of that is due to the structure
    of their Regions. Lower latency and compliance are the biggest benefits from this list.

  7. A. Sharing a single resource among Regions wouldn’t cause any particular security,
    networking, or latency problems. It’s a simple matter of finding a single physical host device
    to run on.

  8. B. Auto Scaling is an important working element of application high availability, but it’s
    not what most directly drives it (that’s load balancing). The most effective and efficient
    way to get the job done is through parallel, load-balanced instances in multiple Availability
    Zones, not Regions.

  9. A. “Data centers running uniform host types” would describe an edge location. The data
    centers within a “broad geographic area” would more closely describe an AWS Region. AZs
    aren’t restricted to a single data center.

  10. C. Imposing virtual networking limits on an instance would be the job of a security group
    or access control list. IP address blocks are not assigned at the Region level. Customers have
    no access to or control over AWS networking hardware.

  11. B. AWS displays AZs in (apparently) random order to prevent too many resources from
    being launched in too few zones.

  12. D. Auto Scaling doesn’t focus on any one resource (physical or virtual) because it’s
    interested only in the appropriate availability and quality of the overall
    service. The job of
    orchestration is for load balancers, not autoscalers.

  13. C. Resource isolation can play an important role in security, but not reliability. Automation
    can improve administration processes, but neither it, nor geolocation, is the most effective
    reliability strategy.

  14. A, C. RDS database instances and Lambda functions are not qualified CloudFront origins.
    EC2 load balancers can be used as CloudFront origins.

  15. D. CloudFront can’t protect against spam and, while it can complement your application’s
    existing redundancy and encryption, those aren’t its primary purpose.

  16. B. Countering the threat of DDoS attacks is the job of AWS Shield. Protecting web
    applications from web-based threats is done by AWS Web Application Firewall. Using
    Lambda to customize CloudFront behavior is for Lambda Edge.

  17. A, B. What’s in the cloud is your responsibility—it includes the administration of
    EC2-based operating systems.

  18. C. There’s no one easy answer, as some managed services are pretty much entirely within
    Amazon’s sphere, and others leave lots of responsibility with the customer. Remember, “if
    you can edit it, you own it.”


  19. D. The AWS Billing Dashboard is focused on your account billing issues. Neither the AWS
    Acceptable Use Monitor nor the Service Status Dashboard actually exists. But nice try.

  20. B. The correct document (and web page https://aws.amazon.com/aup/) for this
    information is the AWS Acceptable Use Policy.


Chapter 5: Securing Your
AWS Resources

  1. A. Identity and Access Management (IAM) is primarily focused on helping you control
    access to your AWS resources. KMS handles access keys. EC2 manages SSH key pairs.
    While IAM does touch on federated management, that’s not its primary purpose.

  2. A, B, D. Including a space or null character is not a password policy option.

  3. C, D. The root user should not be used for day-to-day admin tasks—even as part of an
    “admin” group. The goal is to protect root as much as possible.

  4. D. MFA requires at least two (“multi”) authentication methods. Those will normally
    include a password (something you know) and a token sent to either a virtual or physical
    MFA device (something you have).

  5. B. The -i argument should point to the name (and location) of the key stored on the
    local (client) machine. By default, the admin user on an Amazon Linux instance is named
    ec2-user.

  6. B. While assigning permissions and policy-based roles will work, it’s not nearly as efficient
    as using groups, where you need to set or update permissions only once for multiple users.

  7. C. An IAM role is meant to be assigned to a trusted entity (like another AWS service or a
    federated identity). A “set of permissions” could refer to a policy. A set of IAM users could
    describe a group.

  8. A, D. Federated identities are for permitting authenticated entities access to AWS resources
    and data. They’re not for importing anything from external accounts—neither data nor
    guidance.

  9. C, D. Secure Shell (SSH) is an encrypted remote connectivity protocol, and SSO (single
    sign-on) is an interface feature—neither is a standard for federated identities.

  10. D. The credential report focuses only on your users’ passwords, access keys, and MFA
    status. It doesn’t cover actual activities or general security settings.

  11. B. The credential report is saved to the comma-separated values (spreadsheet) format.
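Because the report is plain CSV, as answer 11 notes, it can be processed with ordinary tooling. A sketch with made-up sample rows; the columns here are a simplified subset of the report's actual layout:

```python
import csv
import io

# A made-up two-row excerpt in the report's comma-separated shape;
# real credential reports carry many more columns (access keys,
# password age, and so on).
report_csv = "user,mfa_active\nalice,true\nbob,false\n"

rows = list(csv.DictReader(io.StringIO(report_csv)))
users_without_mfa = [r["user"] for r in rows if r["mfa_active"] == "false"]
print(users_without_mfa)  # ['bob']
```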

  12. A. Your admin user will need broad access to be effective, so AmazonS3FullAccess and
    AmazonEC2FullAccess—which open up only S3 and EC2, respectively—won’t be enough.
    There is no AdminAccess policy.

    238 Appendix A Answers to Review Questions


  13. D. “Programmatic access” users don’t sign in through the AWS Management Console; they
    access through APIs or the AWS CLI. They would therefore not need passwords or MFA.
    An access key ID alone without a matching secret access key is worthless.

  14. B. When the correct login page (such as
    https://291976716973.signin.aws.amazon.com/console) is loaded, an IAM user only needs
    to enter a username and a valid password. Account numbers and secret access keys are not
    used for this kind of authentication.
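The account-specific sign-in URL in answer 14 follows a fixed pattern; a sketch, using the account ID from the answer's example:

```python
def signin_url(account_id):
    # https://<account-id>.signin.aws.amazon.com/console
    return f"https://{account_id}.signin.aws.amazon.com/console"

print(signin_url("291976716973"))
```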

  15. C. In-transit encryption requires that the data be encrypted on the remote client before
    uploading. Server-side encryption (either SSE-S3 or SSE-KMS) only encrypts data within S3
    buckets. DynamoDB is a NoSQL database service.

  16. A. You can only encrypt an EBS volume at creation, not later.

  17. D. A client-side master key is used to encrypt objects before they reach AWS (specifically S3).
    There are no keys commonly known as either SSH or KMS master keys.

  18. C. SSE-KMS are KMS-managed server-side keys. FedRAMP is the U.S. government’s
    Federal Risk and Authorization Management Program (within which transaction
    data protection plays only a relatively minor role). ARPA is the Australian Prudential
    Regulation Authority.

  19. B. SOC isn’t primarily about guidance or risk assessment, and it’s definitely not a
    guarantee of the state of your own deployments. SOC reports are reports of audits on AWS
    infrastructure that you can use as part of your own reporting requirements.

  20. A, B. AWS Artifact documents are about AWS infrastructure compliance with external
    standards. They can also tangentially provide insight into best practices. They do not
    represent internal AWS design or policies.


Chapter 6: Working with Your
AWS Resources

  1. D. You can sign in as the root user or as an IAM user. Although you need to specify the
    account alias or account ID to log in as an IAM user, those are not credentials. You can’t
    log in to the console using an access key ID.

  2. B. Once you’re logged in, your session will remain active for 12 hours. After that, it’ll
    expire and log you out to protect your account.

  3. A. If a resource that should be visible appears to be missing, you may have the wrong
    Region selected. Since you’re logged in as the root, you have view access to all resources
    in your account. You don’t need an access key to use the console. You can’t select an
    Availability Zone in the navigation bar.

  4. C. Each resource tag you create must have a key, but a value is optional. Tags don’t have to
    be unique within an account, and they are case-sensitive.
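The tag rules in answer 4 map onto the key/value structure the EC2 APIs use; a sketch with made-up tag names:

```python
# EC2-style resource tags: Key is required, Value may be empty.
tags = [
    {"Key": "Environment", "Value": "production"},  # illustrative names
    {"Key": "Backup", "Value": ""},                 # empty value is valid
]

# Keys are case-sensitive, so "Environment" and "environment" differ.
keys = {t["Key"] for t in tags}
print("Environment" in keys, "environment" in keys)  # True False
```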


  5. A. The AWS CLI requires an access key ID and secret key. You can use those of an IAM
    user or the root user. Outbound network access to TCP port 443 is required, not port 80.
    Linux is also not required, although you can use the AWS CLI with Linux, macOS, or
    Windows. You also can use the AWS Console Mobile Application with Android or iOS
    devices.

  6. A, D. You can use Python and the pip package manager or (with the exception of Windows
    Server 2008) the MSI installer to install the AWS CLI on Windows. AWS SDKs don’t
    include the AWS CLI. Yum and Aptitude are package managers for Linux only.

  7. B. The aws configure command walks you through setting up the AWS CLI to specify
    the default Region you want to use as well as your access key ID and secret key. The
    aws --version command displays the version of the AWS CLI installed, but running this
    command isn’t necessary to use the AWS CLI to manage your resources. Rebooting is also
    not necessary. Using your root user to manage your AWS resources is insecure, so there’s no
    need to generate a new access key ID for your root user.

  8. C. The AWS CLI can display output in JSON, text, or table formats. It doesn’t support
    CSV or TSV.
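As a sketch of the difference between two of the formats answer 8 names, here is the same record rendered as JSON and as tab-separated text (the style the CLI's text output uses); the instance data is made up:

```python
import json

# One sample record rendered two ways, mirroring --output json and
# --output text.
record = {"InstanceId": "i-0123456789abcdef0", "State": "running"}

as_json = json.dumps(record)                            # machine-readable
as_text = "\t".join(str(v) for v in record.values())    # shell-pipeline friendly
print(as_json)
print(as_text)
```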

  9. B, D, E. AWS offers SDKs for JavaScript, Java, and PHP. There are no SDKs for Fortran.
    JSON is a format for representing data, not a programming language.

  10. A, B. The AWS Mobile SDK for Unity and the AWS Mobile SDK for .NET and Xamarin
    let you create mobile applications for both Android and Apple iOS devices. The AWS SDK
    for Go doesn’t enable development of mobile applications for these devices. The AWS
    Mobile SDK for iOS supports development of applications for Apple iOS devices but not
    Android.

  11. A, B. AWS IoT device SDKs are available for C++, Python, Java, JavaScript, and
    Embedded C. There isn’t one available for Ruby or Swift.

  12. A, B. The AWS CLI is a program that runs on Linux, macOS, or Windows and allows you
    to interact with AWS services from a terminal. The AWS SDKs let you use your favorite
    programming language to write applications that interact with AWS services.

  13. B. CloudWatch metrics store performance data from AWS services. Logs store text-based
    logs from applications and AWS services. Events are actions that occur against your AWS
    resources. Alarms monitor metrics. Metric filters extract metric information from logs.

  14. D. A CloudWatch alarm monitors a metric and triggers when that metric exceeds a
    specified threshold. It will not trigger if the metric doesn’t change. Termination of an EC2
    instance is an event, and you can’t create a CloudWatch alarm to trigger based on an event.
    You also can’t create an alarm to trigger based on the presence of an IP address in a web
    server log. But you could create a metric filter to look for a specific IP address in the log and
    increment a custom metric when that IP address appears in the log.
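The metric-filter approach answer 14 suggests (extract a count from log text, then feed it into a custom metric) can be sketched as:

```python
# Sketch of the metric-filter idea: scan log lines and increment a
# counter each time a pattern (here, a made-up IP address) appears.
log_lines = [
    "203.0.113.9 GET /index.html",
    "198.51.100.4 GET /about.html",
    "203.0.113.9 GET /login",
]

suspect_ip = "203.0.113.9"
metric_value = sum(1 for line in log_lines if suspect_ip in line)
print(metric_value)  # 2 -- the value pushed to the custom metric
```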

  15. A, C. SNS supports the SMS and SQS protocols for sending notifications. You can’t send a
    notification to a CloudWatch event. There is no such thing as a mobile pull notification.

  16. C, D. CloudWatch Events monitors events that cause changes in your AWS resources as
    well as AWS Management Console sign-in events. In response to an event, CloudWatch
    Events can take an action including sending an SNS notification or rebooting an EC2
    instance. CloudWatch Events can also perform actions on a schedule. It doesn’t monitor
    logs or metrics.

  17. B, D. Viewing an AWS resource triggers an API action regardless of whether it’s done using
    the AWS Management Console or the AWS CLI. Configuring the AWS CLI doesn’t trigger
    any API actions. Logging into the AWS Management Console doesn’t trigger an API action.

  18. A. The CloudTrail event history log stores the last 90 days of management events for each
    Region. Creating a trail is overkill and not as cost-effective since it would involve storing
    logs in an S3 bucket. Streaming CloudTrail logs to CloudWatch would require creating a
    trail. CloudWatch Events doesn’t log management events.

  19. A, D. Creating a trail in the Region where the bucket exists will generate CloudTrail logs,
    which you can then stream to CloudWatch for viewing and searching. CloudTrail event
    history doesn’t log data events. CloudTrail logs global service events by default, but S3 data
    events are not included.

  20. B. Log file integrity validation uses cryptographic hashing to help you assert that no
    CloudTrail log files have been deleted from S3. It doesn’t prevent tampering or deletion and
    can’t tell you how a file has been tampered with. Log file integrity validation has nothing to
    do with CloudWatch.

  21. D. The costs and usage reports show you your monthly spend by service. The reserved
    instances reports and reserved instance recommendations don’t show actual monthly costs.

  22. A. RDS lets you purchase reserved instances to save money. Lambda, S3, and Fargate don’t
    use instances.

  23. B. The reservation utilization report shows how much you have saved using reserved
    instances. The reservation coverage report shows how much you could have potentially
    saved had you purchased reserved instances. The daily costs and monthly EC2 running
    hours costs and usage reports don’t know how much you’ve saved using reserved instances.

  24. D. Cost Explorer will make reservation recommendations for EC2, RDS, ElastiCache,
    Redshift, and Elasticsearch instances. You need to select the service you want it to analyze
    for recommendations. But Cost Explorer will not make recommendations for instances
    that are already covered by reservations. Because your Elasticsearch instances have been
    running continuously for at least the past seven days, that usage would be analyzed.


Chapter 7: The Core Compute Services

  1. C. An instance’s hardware profile is defined by the instance type. High-volume (or low-
    volume) data processing operations and data streams can be handled using any storage
    volume or on any instance (although some may be better optimized than others).

  2. A. The Quick Start includes only the few dozen most popular AMIs. The Community tab
    includes thousands of publicly available AMIs—whether verified or not. The My AMIs tab
    only includes AMIs created from your account.


  3. B, C. AMIs can be created that provide both a base operating system and a pre-installed
    application. They would not, however, include any networking or hardware profile
    information—those are largely determined by the instance type.

  4. B, D. c5d.18xlarge and t2.micro are the names of EC2 instance types, not instance type
    families.

  5. D. A virtual central processing unit (vCPU) is a metric that roughly measures an instance
    type’s compute power in terms of the number of processors on a physical server. It has
    nothing to do with resilience to high traffic, system memory, or the underlying AMI.

  6. A. An EC2 instance that runs on a physical host reserved for and controlled by a single
    AWS account is called a dedicated host. A dedicated host is not an AMI, nor is it an
    instance type.

  7. C. A virtualized partition of a physical storage drive that is directly connected to the EC2
    instance it’s associated with is known as an instance store volume. A software stack archive
    packaged to make it easy to copy and deploy to an EC2 instance describes an EC2 AMI. It’s
    possible to encrypt EBS volumes, but encryption doesn’t define them.

  8. C, D. Instance store volumes cannot be encrypted, nor will their data survive an instance
    shutdown. Those are features of EBS volumes.

  9. B. Spot instances are unreliable for this sort of usage since they can be shut down
    unexpectedly. Reserved instances make economic sense where they’ll be used 24/7 over long
    stretches of time. “Dedicated” isn’t a pricing model.

  10. D. Reserved instances will work here because your “base” instances will need to run 24/7
    over the long term. Spot and spot fleet instances are unreliable for this sort of usage since
    they can be shut down unexpectedly. On-demand instances will incur unnecessarily high
    costs over such a long period.

  11. A. There’s no real need for guaranteed available capacity since it’s extremely rare for AWS
    to run out. You choose how you’ll pay for a reserved instance. All Upfront, Partial Upfront,
    and No Upfront are available options, and there is no automatic billing. An instance would
    never be launched automatically in this context.

  12. A, C. Because spot instances can be shut down, they’re not recommended for applications
    that provide any kind of always-on service.

  13. C, D. Elastic Block Store provides storage volumes for Lightsail and Beanstalk (and for
    EC2, for that matter). Elastic Compute Cloud (EC2) provides application deployment, but
    no one ever accused it of being simple.

  14. A. Beanstalk, EC2 (non-reserved instances), and RDS all bill according to actual usage.

  15. B, D. Ubuntu is an OS, not a stack. WordPress is an application, not an OS.

  16. B, C. Elastic Block Store is, for practical purposes, an EC2 resource. RDS is largely built
    on its own infrastructure.

  17. A, C. While you could, in theory at least, manually install Docker Engine on either a
    Lightsail or EC2 instance, that’s not their primary function.


  18. A, B. Both Lambda and Lightsail are compute services that—while they might possibly
    make use of containers under the hood—are not themselves container technologies.

  19. D. Python is, indeed, a valid choice for a function’s runtime environment. There is no one
    “primary” language for Lambda API calls.

  20. A. While the maximum time was, at one point, 5 minutes, that’s been changed to 15.


Chapter 8: The Core Storage Services

  1. B. Bucket names must be globally unique across AWS, irrespective of Region. The length
    of the bucket name isn’t an issue since it’s between 3 and 63 characters long. Storage classes
    are configured on a per-object basis and have no impact on bucket naming.

  2. A, C. STANDARD_IA and GLACIER storage classes offer the highest levels of
    redundancy and are replicated across at least three Availability Zones. Due to their low
    level of availability (99.9 and 99.5 percent, respectively), they’re the most cost-effective
    for infrequently accessed data. ONEZONE_IA stores objects in only one Availability
    Zone, so the loss of that zone could result in the loss of all objects. The STANDARD and
    INTELLIGENT_TIERING classes provide the highest levels of durability and cross-zone
    replication but are also the least cost-effective for this use case.

  3. A, D. S3 is an object storage service, while EBS is a block storage service that stores volumes.
    EBS snapshots are stored in S3. S3 doesn’t store volumes, and EBS doesn’t store objects.

  4. A, B, D. Object life cycle configurations can perform transition or expiration actions based on
    an object’s age. Transition actions can move objects between storage classes, such as between
    STANDARD and GLACIER. Expiration actions can delete objects and object versions. Object
    life cycle configurations can’t delete buckets or move objects to an EBS volume.
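A configuration of the kind answer 4 describes generally follows this shape. A sketch: the rule ID, prefix, and day counts are invented for illustration.

```python
# Illustrative S3 lifecycle configuration: transition objects to GLACIER
# after 90 days, then expire them after 365.
lifecycle = {
    "Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},
    }]
}

rule = lifecycle["Rules"][0]
print(rule["Transitions"][0]["StorageClass"], rule["Expiration"]["Days"])  # GLACIER 365
```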

  5. A, B. You can use bucket policies or access control lists (ACLs) to grant anonymous users
    access to an object in S3. You can’t use user policies to do this, although you can use them
    to grant IAM principals access to objects. Security groups control access to resources in a
    virtual private cloud (VPC) and aren’t used to control access to objects in S3.
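A bucket policy granting anonymous read access, as answer 5 describes, typically takes this shape; a sketch, with a placeholder bucket name:

```python
# Sketch of an S3 bucket policy allowing anonymous object reads.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",               # any (anonymous) principal
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
    }],
}

print(policy["Statement"][0]["Principal"])  # *
```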

  6. C, D. Both S3 and Glacier are designed for durable, long-term storage and offer the same
    level of durability. Data stored in Glacier can be reliably retrieved within eight hours using the
    Expedited or Standard retrieval options. Data stored in S3 can be retrieved even faster than
    Glacier. S3 can store objects up to 5 TB in size, and Glacier can store archives up to 40 TB. Both
    S3 and Glacier will meet the given requirements, but Glacier is the more cost-effective solution.

  7. B. You can create or delete vaults from the Glacier service console. You can’t upload,
    download, or delete archives. To perform archive actions, you must use the AWS Command
    Line Interface, an AWS SDK, or a third-party program. Glacier doesn’t use buckets.

  8. D. The Standard retrieval option typically takes 3 to 5 hours to complete. Expedited
    takes 1 to 5 minutes, and Bulk takes 5 to 12 hours. There is no Provisioned retrieval
    option, but you can purchase provisioned capacity to ensure Expedited retrievals
    complete in a timely manner.



  9. A. A Glacier archive can be as small as 1 byte and as large as 40 TB. You can’t have a zero-
    byte archive.

  10. B, D. The tape gateway and volume gateway types let you connect to iSCSI storage. The file
    gateway supports NFS. There’s no such thing as a cached gateway.

  11. B. All AWS Storage Gateway types—file, volume, and tape gateways—primarily store data
    in S3 buckets. From there, data can be stored in Glacier or EBS snapshots, which can be
    instantiated as EBS volumes.

  12. A, B, D, E. The AWS Storage Gateway allows transferring files from on-premises servers
    to S3 using industry-standard storage protocols. The AWS Storage Gateway functioning as
    a file gateway supports the SMB and NFS protocols. As a volume gateway, it supports the
    iSCSI protocol. AWS Snowball and the AWS CLI also provide ways to transfer data to S3,
    but using them requires installing third-party software.

  13. A, C, E. The volume gateway type offers two configurations: stored volumes and cached
    volumes. Stored volumes store all data locally and asynchronously back up that data to S3
    as EBS snapshots. Stored volumes can be up to 16 TB in size. In contrast, cached volumes
    locally store only a frequently used subset of data but do not asynchronously back up the
    data to S3 as EBS snapshots. Cached volumes can be up to 32 TB in size.

  14. C. The 80 TB Snowball device offers 72 TB of usable storage and is the largest available.
    The 50 TB Snowball offers 42 TB of usable space.

  15. A, B. AWS Snowball enforces encryption at rest and in transit. It also uses a TPM chip
    to detect unauthorized changes to the hardware or software. Snowball doesn’t use NFS
    encryption, and it doesn’t have tamper-resistant network ports.

  16. C. If AWS detects any signs of tampering or damage, it will not replace the TPM chip or
    transfer customer data from the device. Instead, AWS will securely erase it.

  17. B. The Snowball Client lets you transfer files to or from a Snowball using a machine
    running Windows, Linux, or macOS. It requires no coding knowledge, but the S3 SDK
    Adapter for Snowball does. Snowball doesn’t support the NFS, iSCSI, or SMB storage
    protocols.

  18. A, D. Snowball Edge offers compute power to run EC2 instances and supports copying files
    using the NFSv3 and NFSv4 protocols. Snowball devices can’t be clustered and don’t have a
    QSFP+ port.

  19. B. The Snowball Edge—Compute Optimized with GPU option is optimized for machine
    learning and high-performance computing applications. Although the Compute Optimized
    and Storage Optimized options could work, they aren’t the best choices. There’s no
    Network Optimized option.

  20. B. Snowball Edge with the Compute Optimized configuration includes a QSFP+ network
    interface that supports up to 100 Gbps. The Storage Optimized configuration has a QSFP+
    port that supports only up to 40 Gbps. The 80 TB Snowball supports only up to 10 Gbps.
    A storage gateway is a virtual machine, not a hardware device.

Appendix A Answers to Review Questions


Chapter 9: The Core Database Services

  1. B. A relational database stores data in columns called attributes and rows called records.
    Nonrelational databases—including key-value stores and document stores—store data in
    collections or items but don’t use columns or rows.

  2. B. The SQL INSERT statement can be used to add data to a relational database. The QUERY
    command is used to read data. CREATE can be used to create a table but not add data to it.
    WRITE is not a valid SQL command.
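The distinction drawn above can be demonstrated with Python's built-in `sqlite3` module: `CREATE TABLE` only defines the schema, `INSERT` adds the data, and `SELECT` (not "QUERY") reads it back. SQLite is used here purely for illustration; it is not one of the RDS engines discussed later.

```python
import sqlite3

# CREATE defines the table, INSERT adds a row, SELECT reads it back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)", (1, "Alice"))
rows = conn.execute("SELECT name FROM customers").fetchall()
print(rows)  # [('Alice',)]
conn.close()
```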

  3. D. A nonrelational database is schemaless, meaning that there’s no need to predefine all the
    types of data you’ll store in a table. This doesn’t preclude you from storing data with a fixed
    structure, as nonrelational databases can store virtually any kind of data. A primary key is
    required to uniquely identify each item in a table. Creating multiple tables is allowed, but
    most applications that use nonrelational databases use only one table.

  4. C. A no-SQL database is another term for a nonrelational database. By definition, nonrelational
    databases are schemaless and must use primary keys. There’s no such thing as a schemaless
    relational database. No-SQL is never used to describe a relational database of any kind.

  5. B. RDS instances use EBS volumes for storage. They can no longer use magnetic storage.
    Instance volumes are for temporary, not database storage. You can take a snapshot of a
    database instance and restore it to a new instance with a new EBS volume, but an RDS
    instance can’t use a snapshot directly for database storage.

  6. B, D. PostgreSQL and Amazon Aurora are options for RDS database engines. IBM dBase
    and the nonrelational databases DynamoDB and Redis are not available as RDS database
    engines.

  7. A, B. Aurora is Amazon’s proprietary database engine that works with existing
    PostgreSQL and MySQL databases. Aurora doesn’t support MariaDB, Oracle, or Microsoft
    SQL Server.

  8. B, C. Multi-AZ and snapshots can protect your data in the event of an Availability Zone
    failure. Read replicas don’t use synchronous replication and may lose some data. IOPS is a
    measurement of storage throughput. Vertical scaling refers to changing the instance class
    but has nothing to do with preventing data loss.

  9. B. Amazon Aurora uses a shared storage volume that automatically expands up to 64 TB.
    The Microsoft SQL Server and Oracle database engines don’t offer this. Amazon Athena is
    not a database engine.

  10. A. Multi-AZ lets your database withstand the failure of an RDS instance, even if the
    failure is due to an entire Availability Zone failing. Read replicas are a way to achieve
    horizontal scaling to improve performance of database reads but don’t increase availability.
    Point-in-time recovery allows you to restore a database up to a point in time but doesn’t
    increase availability.

  11. B, D. A partition is an allocation of storage backed by solid-state drives and replicated
    across multiple Availability Zones. Tables are stored across partitions, but tables do not



    contain partitions. A primary key, not a partition, is used to uniquely identify an item
    in a table.

  12. A. The minimum monthly availability for DynamoDB is 99.99 percent in a single Region.
    It’s not 99.95 percent, 99.9 percent, or 99.0 percent.

  13. D. Items in a DynamoDB table can have different attributes. For example, one item can
    have five attributes, while another has only one. A table can store items containing multiple
    data types. There’s no need to predefine the number of items in a table. Items in a table
    can’t have duplicate primary keys.
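The flexibility described above — items in one table carrying different attributes while sharing a unique primary key — can be modeled with plain dictionaries. The attribute names and key values here are illustrative assumptions.

```python
# Two items from the same hypothetical table: one with five attributes,
# one with only the primary key ("pk"). Only the key is mandatory and
# must be unique.
items = [
    {"pk": "user#1", "name": "Alice", "email": "a@example.com",
     "age": 30, "tier": "gold"},   # five attributes
    {"pk": "user#2"},              # a single attribute
]

keys = [item["pk"] for item in items]
print(len(set(keys)) == len(keys))  # True: no duplicate primary keys
```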

  14. C, E. Increasing WCU or enabling Auto Scaling will improve write performance against
    a table. Increasing or decreasing RCU won’t improve performance for writes. Decreasing
    WCU will make write performance worse.
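The WCU arithmetic behind this answer follows the standard rule that one WCU supports one write per second of an item up to 1 KB, with larger items rounded up to the next whole kilobyte. The function below is a sketch of that rule; its name is ours, not an AWS API.

```python
import math

def required_wcu(writes_per_second: int, item_size_kb: float) -> int:
    """WCU needed: each write consumes one WCU per 1 KB (rounded up)
    of item size."""
    return writes_per_second * math.ceil(item_size_kb)

print(required_wcu(100, 2.5))  # 300 (2.5 KB rounds up to 3 WCU per write)
print(required_wcu(10, 0.5))   # 10  (items under 1 KB cost 1 WCU per write)
```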

  15. C. A scan requires reading every partition on which the table is stored. A query occurs
    against the primary key, enabling DynamoDB to read only the partition where the matching
    item is stored. Writing and updating an item are not read-intensive operations.

  16. D. A primary key must be unique within a table. A full name, phone number, or city may
    not be unique, as some customers may share the same name or phone number. A randomly
    generated customer ID number would be unique and appropriate for use as a primary key.

  17. B. Dense storage nodes use magnetic disks. Dense compute nodes use SSDs. There are no
    such nodes as dense memory or cost-optimized.

  18. A. Redshift Spectrum can analyze structured data stored in S3. There is no such service
    as Redshift S3. Amazon Athena can analyze structured data in S3, but it’s not a feature of
    Redshift. Amazon RDS doesn’t analyze data stored in S3.

  19. B. A data warehouse stores large amounts of structured data from other relational
    databases. It’s not called a data storehouse or a report cluster. Dense storage node is a type
    of Redshift compute node.

  20. A. Dense storage nodes can be used in a cluster to store up to 2 PB of data. Dense compute
    nodes can be used to store up to 326 TB of data.


Chapter 10: The Core Networking Services

  1. B, D. For each account, AWS creates a default VPC in each Region. A VPC spans all
    Availability Zones within a Region. VPCs do not span Regions.

  2. A. A VPC or subnet CIDR can have a size between /16 and /28 inclusive, so 10.0.0.0/28
    would be the only valid CIDR.
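The /16-to-/28 rule above is easy to check with Python's standard `ipaddress` module; the helper's name is ours.

```python
import ipaddress

def valid_vpc_cidr(cidr: str) -> bool:
    """A VPC or subnet CIDR must have a prefix length between /16 and /28."""
    try:
        net = ipaddress.ip_network(cidr)
    except ValueError:
        return False
    return 16 <= net.prefixlen <= 28

print(valid_vpc_cidr("10.0.0.0/28"))  # True
print(valid_vpc_cidr("10.0.0.0/8"))   # False (block larger than /16)
print(valid_vpc_cidr("10.0.0.0/30"))  # False (block smaller than /28)
```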

  3. B, C. A subnet exists in only one Availability Zone, and it must have a CIDR that’s a
    subset of the CIDR of the VPC in which it resides. There’s no requirement for a VPC to have
    two subnets, but it must have at least one.
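The containment requirement above — a subnet's CIDR must fall inside the CIDR of its VPC — maps directly onto `ipaddress.IPv4Network.subnet_of` (Python 3.7+). The CIDR values here are illustrative assumptions.

```python
import ipaddress

# subnet_of checks whether one network is wholly contained in another.
vpc = ipaddress.ip_network("10.0.0.0/16")
inside = ipaddress.ip_network("10.0.1.0/24")
outside = ipaddress.ip_network("172.16.0.0/24")

print(inside.subnet_of(vpc))   # True: a valid subnet CIDR for this VPC
print(outside.subnet_of(vpc))  # False: not contained in the VPC's CIDR
```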



  4. C. When you create a security group, it contains an outbound rule that allows access to
    any IP address. It doesn’t contain an inbound rule by default. Security group rules can only
    permit access, not deny it, so any traffic not explicitly allowed will be denied.

  5. B, D. A network access control list is a firewall that operates at the subnet level. A security
    group is a firewall that operates at the instance level.

  6. B. A VPC peering connection is a private connection between only two VPCs. It uses the
    private AWS network, and not the public internet. A VPC peering connection is different
    than a VPN connection.

  7. A, B. A Direct Connect link uses a dedicated link rather than the internet to provide
    predictable latency. Direct Connect doesn’t use encryption but provides some security by
    means of a private link. A VPN connection uses the internet for transport, encrypting
    data with AES 128- or 256-bit encryption. A VPN connection doesn’t require proprietary
    hardware.

  8. B, D. When you register a domain name, you can choose a term between 1 year and 10
    years. If you use Route 53, it will automatically create a public hosted zone for the domain.
    The registrar and DNS hosting provider don’t have to be the same entity, but often are.

  9. B. A Multivalue Answer routing policy can return a set of multiple values, sorted randomly.
    A simple record returns a single value. A Failover routing policy always routes users to the
    primary resource unless it’s down, in which case it routes users to the secondary resource.
    A Latency routing policy sends users to the resource in the AWS Region that provides the
    least latency.

  10. C. All Route 53 routing policies except for Simple can use health checks.

  11. C. An Endpoint health check works by connecting to the monitored endpoint via HTTP,
    HTTPS, or TCP. A CloudWatch alarm health check simply reflects the status of a
    CloudWatch alarm. A Calculated health check derives its status from multiple other health
    checks. There is no such thing as a Simple health check.

  12. A. A Weighted routing policy lets you distribute traffic to endpoints according to a ratio
    that you define. None of the other routing policies allows this.
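The ratio-based distribution described above can be simulated with `random.choices`, which picks from a population in proportion to supplied weights. The endpoint names, weights, and seed are illustrative assumptions, not Route 53 behavior verbatim.

```python
import random

# Simulate a Weighted routing policy: two endpoints with a 3:1 weight
# ratio should receive roughly 75% and 25% of lookups.
endpoints = ["blue.example.com", "green.example.com"]  # hypothetical names
weights = [3, 1]

random.seed(42)  # fixed seed so the demo is repeatable
picks = random.choices(endpoints, weights=weights, k=1000)
blue_share = picks.count("blue.example.com") / len(picks)
print(round(blue_share, 2))  # close to 0.75
```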

  13. B. A private hosted zone is associated with a VPC and allows resources in the VPC to
    resolve private domain names. A public hosted zone is accessible by anyone on the internet.
    Domain name registration is for public domain names. Health checks aren’t necessary for
    name resolution to work.

  14. A. Route 53 private hosted zones provide DNS resolution for a single domain name within
    multiple VPCs. Therefore, to support resolution of one domain name for two VPCs, you’d
    need one private hosted zone.

  15. B. CloudFront has edge locations on six continents (Antarctica is a hard place to get to).

  16. B. A CloudFront origin is the location that a distribution sources content from. Content is
    stored in edge locations. A distribution defines the edge locations and origins to use.

  17. B. The RTMP distribution type is for delivering streaming content and requires
    you to provide a media player. A Web distribution can also stream audio or video



    content but doesn’t require you to provide a media player. Streaming and Edge are not
    distribution types.

  18. A. The more edge locations you use for a distribution, the more you’ll pay. Selecting the
    minimum number of locations will be the most cost effective.

  19. B. There are more than 150 edge locations throughout the world.

  20. A, B. An origin can be an EC2 instance or a public S3 bucket. You can’t use a private S3
    bucket as an origin.


Chapter 11: Automating Your
AWS Workloads

  1. C. CloudFormation can create AWS resources and manages them collectively in a stack.
    Templates are written in the CloudFormation language, not Python. CloudFormation can’t
    create resources outside of AWS. It also doesn’t prevent manual changes to resources in a
    stack.

  2. B, D. CloudFormation templates are written in the YAML or JSON format.
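A minimal template in the JSON form mentioned above can be sketched as a Python dictionary serialized with `json.dumps`; the same structure could equally be written in YAML. The resource's logical ID is an illustrative assumption.

```python
import json

# A minimal CloudFormation template: one S3 bucket resource. "MyBucket"
# is a hypothetical logical ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket"
        }
    },
}

document = json.dumps(template, indent=2)
print(document)
```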

  3. A. Parameters let you input customizations when creating a CloudFormation stack
    without having to modify the underlying template. Parameters don’t prevent stack updates
    or unauthorized changes. A template can be used to create multiple stacks, regardless of
    whether it uses parameters.

  4. A, B. Resources CloudFormation creates are organized into stacks. When you update
    a stack, CloudFormation analyzes the relationships among resources in the stack and
    updates dependent resources as necessary. This does not, however, mean that any resource
    you create using CloudFormation will work as you expect. Provisioning resources using
    CloudFormation is not necessarily faster than using the AWS CLI.

  5. A, C. CodeCommit is a private Git repository that offers versioning and differencing. It
    does not perform deployments.

  6. B. Differencing lets you see the differences between two versions of a file, which can be
    useful when figuring out what change introduced a bug. Versioning, not differencing, is
    what allows reverting to an older version of a file. Differencing doesn’t identify duplicate
    lines of code or tell you when an application was deployed.

  7. D. Continuous integration is the practice of running code through a build or test process
    as soon as it’s checked into a repository. Continuous delivery and continuous deployment
    include continuous integration but add deployment to the process. Differencing only shows
    the differences between different versions of a file but doesn’t perform any testing.

  8. B, D. Build.general1.medium and build.general1.large support Windows and Linux
    operating systems. Build.general1.small supports Linux only. The other compute types
    don’t exist.



  9. A, B. A CodeBuild build environment always contains an operating system and a Docker
    image. It may contain the other components but doesn’t have to.

  10. A, B, C. CodeDeploy can deploy application files to Linux or Windows EC2 instances
    and Docker containers to ECS. It can’t deploy an application to smartphones, and it can’t
    deploy files to an S3 bucket.

  11. B. At the very least, a CodePipeline must consist of a source stage and a deploy stage.

  12. D. A launch template can be used to launch instances manually and with EC2 Auto
    Scaling. A launch configuration can’t be used to launch instances manually. An instance
    role is used to grant permissions to applications running on an instance. Auto Scaling can’t
    provision instances using a CloudFormation template.

  13. A, D. The maximum and minimum group size values limit the number of instances in an
    Auto Scaling group. The desired capacity (also known as the group size) is the number
    of instances that Auto Scaling will generally maintain, but Auto Scaling can launch or
    terminate instances if dynamic scaling calls for it.
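The bounding behavior described above reduces to a simple clamp: the group never shrinks below the minimum size or grows past the maximum, whatever capacity scaling requests. The function below is a sketch of that rule; its name is ours.

```python
def clamp_capacity(requested: int, min_size: int, max_size: int) -> int:
    """Bound a requested capacity by the group's minimum and maximum sizes,
    as Auto Scaling does."""
    return max(min_size, min(requested, max_size))

print(clamp_capacity(12, 2, 10))  # 10 (capped at the maximum group size)
print(clamp_capacity(1, 2, 10))   # 2  (raised to the minimum group size)
print(clamp_capacity(7, 2, 10))   # 7  (within bounds, honored as-is)
```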

  14. B. Auto Scaling will use self-healing to replace the failed instance to maintain the desired
    capacity of 7. Terminating an instance or failing to replace the failed one will result in
    6 instances. Auto Scaling won’t ever change the desired capacity in response to a failed
    instance.

  15. A. Predictive scaling creates a scheduled scaling action based on past usage patterns.
    Scheduled scaling and dynamic scaling do not create scheduled scaling actions. There is no
    such thing as pattern scaling.

  16. B. A Command document can execute commands on an EC2 instance. An Automation
    document can perform administrative tasks on AWS, such as starting or stopping an
    instance. There is no such thing as a Script document or a Run document.

  17. D. An Automation document can perform administrative tasks on AWS, such as starting or
    stopping an instance. A Command document can execute commands on an EC2 instance.
    There is no such thing as a Script document or a Run document.

  18. B. AWS OpsWorks Stacks uses Chef recipes, while AWS OpsWorks for Puppet Enterprise
    uses Puppet modules. There is no service called AWS OpsWorks Layers or AWS OpsWorks
    for Automation.

  19. B, D. OpsWorks supports the Puppet Enterprise and Chef configuration management
    platforms. It doesn’t support SaltStack, Ansible, or CFEngine.

  20. C. Only an OpsWorks layer contains at least one EC2 instance. There’s no such thing as an
    EC2 Auto Scaling layer.


Chapter 12: Common Use-Case Scenarios

  1. C. The five pillars of the Well-Architected Framework are reliability, performance efficiency,
    security, cost optimization, and operational excellence. Resiliency is not one of them.



  2. A, D. Security is about protecting the confidentiality, integrity, and availability of data.
    Granting each AWS user their own IAM username and password makes it possible to
    ensure the confidentiality of data. Enabling S3 versioning protects the integrity of data by
    maintaining a backup of an object. Deleting an empty S3 bucket doesn’t help with any of
    these. It’s not possible to create a security group rule that denies access to unused ports
    since security groups deny any traffic that’s not explicitly allowed.

  3. C, D. Preventing the accidental termination of an EC2 instance in the Auto Scaling group
    can avoid overburdening and causing performance issues on the remaining instance, especially
    during busy times. Using CloudFront can help improve performance for end users by
    caching the content in an edge location close to them. Doubling the number of instances
    might improve performance, but because performance is already acceptable, doing this
    would be inefficient. Monitoring for unauthorized access alone won’t improve performance
    or performance efficiency.

  4. A, C. Deleting unused S3 objects and unused application load balancers can reduce costs
    since you’re charged for both. Deleting unused VPCs and empty S3 buckets won’t reduce
    costs since they don’t cost anything.

  5. B. Operational excellence is concerned with strengthening the other four pillars of
    reliability, performance efficiency, security, and cost optimization; automation is the key to
    achieving each of these. Improving bad processes and making people work longer hours run
    counter to achieving operational excellence. Adding more security personnel may be a good
    idea, but it isn’t a key component of operational excellence.

  6. B. In a default VPC, AWS creates a subnet for each Availability Zone in the Region. Hence,
    if there are three subnets in the default VPC, there must be three Availability Zones.

  7. A. Application load balancer listeners use security groups to control inbound access, so you
    need to apply a security group that has an inbound rule allowing HTTP access. Applying
    the security group rule to the database instance won’t help, since users don’t connect
    directly to the database instance. You can’t apply a security group to a subnet, only a
    network access control list.

  8. A. An application load balancer can use health checks to identify failed instances and
    remove them from load balancing. This can prevent a user from ever reaching a failed
    instance. A load balancer can’t replace a failed instance, but Auto Scaling can. An
    application load balancer distributes traffic to instances using a round-robin algorithm, not
    based on how busy those instances are. An application load balancer doesn’t cache content.

  9. D. A launch template tells Auto Scaling how to configure the instances it provisions. A
    dynamic scaling policy controls how Auto Scaling scales in and out based on CloudWatch
    metrics. There’s no such thing as a launch directive. Auto Scaling does not reference a
    CloudFormation template, but you can use a CloudFormation template to create a stack
    that contains a launch template.

  10. B. The maximum group size limits the number of instances in the group. Setting the group
    size (also known as the desired capacity) or minimum group size to 5 would increase the
    number of instances to 5 but would not stop Auto Scaling from subsequently adding more
    instances. Deleting the target tracking policy would not necessarily prevent the number of
    instances in the group from growing, as another process such as a scheduled scaling policy
    could add more instances to the group.



  11. B. A static website serves content just as it’s stored without changing the content on the fly.
    A WordPress blog, a social media website, and a web-based email application all compile
    content from a database and mix it in with static content before serving it up to the user.

  12. A, C. Objects you upload to an S3 bucket are not public by default, nor are they
    accessible to all AWS users. Even if you try to make an object public using an ACL, S3 will
    immediately remove the ACL, but you can disable this behavior. S3 never removes objects
    by default.

  13. A. To have S3 host your static website, you need to enable bucket hosting in the S3 service
    console. It’s not necessary to disable or enable default encryption or object versioning.
    There’s also no need to make all objects in the bucket public, but only those that you want
    S3 to serve up.

  14. B. Purchasing and using a custom domain name is the best option for a friendly URL. You
    need to name the bucket the same as the domain name. Creating a bucket name with only
    words is unlikely to work, regardless of Region, as bucket names must be globally unique.
    A bucket name can’t start with a number.

  15. A. Websites hosted in S3 are served using unencrypted HTTP, not secure HTTPS. The
    content is publicly readable, but that doesn’t mean the public can modify it. You don’t have
    to use a custom domain name, as S3 provides an endpoint URL for you. A website hosted in
    S3 is stored in a bucket, and a bucket exists in only one Region.

  16. C. The reliability of an application can be impacted by the failure of resources the
    application depends on. One way a resource can fail is if it’s misconfigured. Taking EBS
    snapshots of an instance or provisioning more instances than you need won’t impact
    reliability. The user interface being difficult to use might be an annoyance for the user but
    doesn’t affect the actual reliability of the application.

  17. C. You may have control over your VPC, but the rest of the network between your
    application and users on the internet is not under your control. Compute, storage, and any
    database your application uses are, or at least theoretically could be, under your control.

  18. D. An Auto Scaling group can use an ELB health check to determine whether an instance is
    healthy. There is no such thing as an S3 health check, a VPC health check, or an SNS health
    check.

  19. B. You’re responsible for S3 charges related to your static website. You’re not charged
    for compute with S3. No one may modify the content of your site unless you give them
    permission. The S3 Standard storage class keeps objects in multiple Availability Zones, so
    the outage of one won’t affect the site.

  20. A. The format of the URL is the bucket name, followed by s3-website-, the Region
    identifier, and then amazonaws.com.
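The URL format described in this answer can be expressed as a one-line string builder. This sketches the classic dash-separated endpoint form the answer describes; the bucket name and Region are illustrative assumptions, and the helper's name is ours.

```python
def s3_website_endpoint(bucket: str, region: str) -> str:
    """Build an S3 static website URL: bucket name, then s3-website-,
    then the Region identifier, then amazonaws.com."""
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("example.com", "us-east-1"))
# http://example.com.s3-website-us-east-1.amazonaws.com
```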


    Assessment Test

    1. Which Virtual Private Network (VPN) protocols are supported under the AWS managed
      VPN connection option?

      1. Internet Protocol Security (IPsec)

      2. Generic Routing Encapsulation (GRE)

      3. Dynamic Multipoint VPN (DMVPN)

      4. Layer 2 Tunneling Protocol (L2TP)

    2. How will you vertically scale Virtual Private Network (VPN) throughput in a Virtual
      Private Cloud (VPC) when terminating the VPN on Amazon Elastic Compute Cloud
      (Amazon EC2) with minimal downtime?

      1. Attach multiple elastic network interfaces to the existing Amazon EC2 instance
        responsible for VPN termination.

      2. Stop the Amazon EC2 instance and change the instance type to a larger instance type.
        Start the instance.

      3. Take a snapshot of the instance. Launch a new, larger instance using this snapshot, and
        move the Elastic IP address from the existing instance to the new instance.

      4. Launch a new Amazon EC2 instance of a larger instance type. Move the Amazon Elas-
        tic Block Store (Amazon EBS) disk from the existing instance to the new instance.

    3. Which of the following is required to create a 1 Gbps AWS Direct Connect connection?

      1. Open Shortest Path First (OSPF)

      2. 802.1Q Virtual Local Area Network (VLAN)

      3. Bidirectional Forwarding Detection (BFD)

      4. Single-mode fiber

    4. The Letter of Authorization – Connecting Facility Assignment (LOA-CFA) document
      downloaded via the AWS Management Console provides the AWS Direct Connect location
      provider with which of the following?

      1. The cross-connect port detail for the AWS end of the connection

      2. The cross-connect port detail for the customer end of the connection

      3. The cross-connect’s assigned AWS Region

      4. The billing address for the cross-connect

    5. You have a three-tier web application. You have to move this application to AWS. As a first
      step, you decide to move the web layer to AWS while keeping the application and database
      layer on-premises. During initial phases of this migration, the web layer will have servers
      both in AWS and on-premises. How will you architect this setup? (Choose two.)

      1. Set up an AWS Direct Connect private Virtual Interface (VIF).

      2. Use Network Load Balancer to distribute traffic to the web layer on-premises and in
        the Virtual Private Cloud (VPC).



      3. Set up an AWS Direct Connect public VIF.

      4. Set up an IP Security (IPsec) Virtual Private Network (VPN) from on-premises to AWS,
        terminating at the Virtual Private Gateway (VGW).

      5. Use Classic Load Balancer to distribute traffic to the web layer on-premises and in
        the VPC.

    6. You have set up a transit Virtual Private Cloud (VPC) architecture. You are connected to
      the hub VPC using AWS Direct Connect and a detached Virtual Private Gateway (VGW).
      You want all hybrid IT traffic to the production spoke VPC to pass through the transit
      hub VPC. You also want on-premises traffic to the test VPC to bypass the transit VPC and
      reach the test spoke VPC directly. How will you architect this solution, considering least
      latency and maximum security?

      1. Set up an AWS Direct Connect private Virtual Interface (VIF) to an AWS Direct Con-
        nect Gateway. Attach the VGW of the test VPC to the AWS Direct Connect Gateway.

      2. Assign public IP addresses to the Amazon Elastic Compute Cloud (Amazon EC2)
        instance in the test VPC, and access these resources using the public IP addresses over
        AWS Direct Connect public VIF.

      3. Set up a VPN from a detached VGW to an Amazon EC2 instance in the test VPC.

      4. Set up a VPN from the detached VGW to the VGW of the test VPC.

    7. You have created a Virtual Private Cloud (VPC) with an IPv4 CIDR of 10.0.0.0/27. What is
      the maximum number of IPv4 subnets that you can create?

      1. 1

      2. 2

      3. 3

      4. 4

    8. You create a new Virtual Private Cloud (VPC) in us-east-1 and provision three subnets
      inside this VPC. Which of the following statements is true?

      1. By default, these subnets will not be able to communicate with each other; you will
        need to create routes.

      2. All subnets are public by default.

      3. All subnets will have a route to one another.

      4. Each subnet will have identical Classless Inter-Domain Routing (CIDR) blocks.

    9. Your networking group has decided to migrate all of the 192.168.0.0/16 Virtual Private
      Cloud (VPC) instances to 10.0.0.0/16. Which of the following is a valid option?

      1. Add a new 10.0.0.0/16 Classless Inter-Domain Routing (CIDR) range to the 192.168.0.0/16
        VPC. Change the existing addresses of instances to the 10.0.0.0/16 space.

      2. Change the initial VPC CIDR range to the 10.0.0.0/16 CIDR.

      3. Create a new 10.0.0.0/16 VPC. Use VPC peering to migrate workloads to the new VPC.

      4. Use Network Address Translation (NAT) in the 192.168.0.0/16 space to the
        10.0.0.0/16 space using NAT Gateways.



    10. What do Amazon CloudFront Origin Access Identities (OAIs) do?

      1. Increase the performance of Amazon CloudFront by preloading video streams.

      2. Allow the use of Network Load Balancer as an origin server.

      3. Restrict access to Amazon Elastic Compute Cloud (Amazon EC2) web instances.

      4. Restrict access to an Amazon Simple Storage Service (Amazon S3) bucket to only spe-
        cial Amazon CloudFront users.

    11. Which types of distributions are required to support Amazon CloudFront Real-Time Mes-
      saging Protocol (RTMP) media streaming? (Choose two.)

      1. An RTMP distribution for the media files

      2. A web distribution for the media player

      3. A web distribution for the media files

      4. An RTMP distribution for media files and the media player

      5. Amazon CloudFront does not support RTMP streaming.

    12. Voice calls to international numbers from inside your company must go through an
      open-source Session Border Controller (SBC) installed on a custom Linux Amazon
      Machine Image (AMI) in your Virtual Private Cloud (VPC) public subnet. The SBC
      handles the real-time media and voice signaling. International calls often have garbled
      voice, and it is difficult to understand what people are saying. What may increase the
      quality of international voice calls?

      1. Place the SBC in a placement group to reduce latency.

      2. Add additional network interfaces to the instance.

      3. Use an Application Load Balancer to distribute load to multiple SBCs.

      4. Enable enhanced networking on the instance.

    13. Your big data team is trying to determine why their proof of concept is running slowly. For
      the demo, they are trying to ingest 100 TB of data from Amazon Simple Storage Service
      (Amazon S3) on their c4.8xl instance. They have already enabled enhanced networking.
      What should they do to increase Amazon S3 ingest rates?

      1. Run the demo on premises, and access Amazon S3 from AWS Direct Connect to reduce
        latency.

      2. Split the data ingest on more than one instance, such as two c4.4xl instances.

      3. Place the instance in a placement group, and use an Amazon S3 endpoint.

      4. Place a Network Load Balancer between the instance and Amazon S3 for more
        efficient load balancing and better performance.

    14. An AWS CloudFormation change set can be used for which of the following purposes?
      (Choose two.)

      1. Checking if an existing resource has been altered outside of AWS CloudFormation.

      2. Examining the differences between the current stack and a new template.

      3. Specifying which changes are to be applied to a stack from a new template by editing
        the change set.



      4. Rolling back a previous update to an existing stack.

      5. Executing a stack update after changes are approved in a continuous delivery pipeline.

    15. You have created an AWS CloudFormation stack to manage network resources in an
      account with the intent of allowing unprivileged users to make changes to the stack. When
      a user attempts to make a change and update the stack, however, the user gets a permission
      denied error when a resource is updated. What might be the cause?

      1. The stack does not have a stack policy attached to it that allows updates.

      2. The user does not have permission to invoke the CloudFormation:UpdateStack
        Application Programming Interface (API).

      3. The template does not have a stack policy attached to it that allows updates.

      4. The stack does not have an AWS Identity and Access Management (IAM) service role
        attached to it that allows updates.

    16. You are trying to resolve host names from an instance in VPC A for instances that reside
      in VPC B. The two VPCs are peered within the same region. What action must be taken to
      enable this?

      1. Disable DNS host names by setting the enableDnsHostnames value to false in VPC B,
        the peered VPC.

      2. Enable the value for Allow DNS Resolution from Peer VPC for the VPC peering
        connection.

      3. Build an IP Security (IPsec) tunnel from an instance in the VPC A to the VGW of VPC
        B to allow DNS resolution between the VPCs.

      4. Build your own DNS resolver in VPC B, and point VPC A’s instances to this resolver.

    17. When using Amazon Route 53, the EDNS0 extension is used when you want to do which of
      the following?

      1. Adjust the Time To Live (TTL) of Domain Name System (DNS) records.

      2. Increase the accuracy of geolocation routing by adding optional extensions to the DNS
        protocol.

      3. Increase the accuracy of geolocation routing by removing unneeded extensions to the
        DNS protocol.

      4. Create a geolocation resource record set in a private hosted zone.

    18. What happens when you associate an Amazon CloudFront distribution with an AWS
      Lambda@Edge function?

      1. AWS Lambda is deployed in your Virtual Private Cloud (VPC).

      2. AWS Lambda@Edge will create an Amazon Simple Notification Service (Amazon SNS)
        topic for email notification.

      3. Amazon CloudFront intercepts requests and responses at Amazon CloudFront
        Regional Edge Caches.

      4. Amazon CloudFront intercepts requests and responses at Amazon CloudFront edge
        locations.



    19. After deploying Amazon RDS in a new subnet within a VPC, application developers report
      that they cannot connect to the database from another subnet within the VPC. What action
      must be taken?

      1. Create a VPC peering connection to the Amazon RDS subnets.

      2. Enable Multi-AZ deployment.

      3. Create a route to the Amazon RDS instance subnets.

      4. Add the application server security group to the Amazon RDS inbound security group.

    20. Which of the following techniques is used to mitigate the impact of malicious actors on
      Amazon Route 53?

      1. Classifying and prioritizing requests from users who are known to be reliable

      2. Leveraging customer-provided whitelist/blacklist IP addresses

      3. Blocking traffic using customer-defined Amazon Route 53 security groups

      4. Redirecting suspicious DNS requests to honeypot responders

    21. You are responsible for your company’s AWS resources, and you notice a significant
      amount of traffic from an IP address in a foreign country in which your company does not
      have customers. Further investigation of the traffic indicates that the source of the traffic is
      scanning for open ports on your Amazon Elastic Compute Cloud (Amazon EC2) instances.
      Which one of the following resources can deny the IP address from reaching the instances
      in your VPC?

      1. Security group

      2. Internet gateway (IGW)

      3. Network Access Control List (ACL)

      4. AWS PrivateLink

    22. AWS uses what framework to provide independent confirmation around the efficacy of
      guest-to-guest separation on Amazon Elastic Compute Cloud (Amazon EC2) hypervisors?

      1. Health Insurance Portability and Accountability Act (HIPAA)

      2. International Organization for Standardization (ISO) 27001

      3. Service Organization Controls (SOC) 2

      4. Payment Card Industry Data Security Standard (PCI DSS)

    23. You place an Application Load Balancer in front of two web servers that are stateful. Users
      begin to report intermittent connectivity issues when accessing the website. Why is the site
      not responding?

      1. The website needs to have port 443 open.

      2. Sticky sessions must be enabled on the Application Load Balancer.

      3. The web servers need to have their security group set to allow all Transmission Control
        Protocol (TCP) traffic from 0.0.0.0/0.

      4. The network Access Control List (ACL) on the subnet needs to allow a stateful connection.



    24. You create a new instance, and you are able to connect over Secure Shell (SSH) to its private IP
      address from your corporate network. The instance does not have Internet access, however.
      Your internal policies forbid direct access to the Internet. What is required to enable access
      to the Internet?

      1. Assign a public IP address to the instance.

      2. Ensure that port 80 and port 443 are not set to DENY in the instance security group.

      3. Deploy a Network Address Translation (NAT) gateway in the private subnet.

      4. Make sure that there is a default route in the subnet route table that goes to your
        on-premises network.

    25. You create Virtual Private Cloud (VPC) peering connections between VPC A and VPC B
      and between VPC B and VPC C. You can communicate between VPC A and VPC B and
      communicate between VPC B and VPC C, but not between VPC A and VPC C. What must
      be done to allow traffic between VPC A and VPC C?

      1. Create a network Access Control List (ACL) to allow the traffic.

      2. Create an additional peering connection between VPC A and VPC C.

      3. Update the route tables in VPC A and VPC C.

      4. Add a rule to the security groups on VPC A and VPC C.


Answers to Assessment Test

  1. A. Only IPsec is a supported VPN protocol.

  2. C. To scale vertically, you need to change the instance type to a larger instance. Setting
    up a standby instance and moving the IP to this instance will result in the least amount of
    downtime. The downtime will be equal to the time required for the instance to re-create
    Internet Protocol Security (IPsec) tunnels and establish Border Gateway Protocol (BGP)
    neighbor relationships. This will be done automatically, or it will have to be initiated
    manually by you, depending on the software on the Amazon EC2 instance. If you stop an
    existing instance and change its instance type, you also suffer the additional downtime
    required to boot the instance.

  3. D. AWS Direct Connect supports 1000BASE-LX or 10GBASE-LR connections over
    single-mode fiber using Ethernet transport. Your device must support 802.1Q VLANs:
    802.1Q tagging is required for creating the virtual interface, but it is not required for
    creating the connection.

  4. A. A LOA-CFA provides details of the port assignment on the AWS side of the
    cross-connect with full demarcation and interface details. It is the customer’s
    responsibility to provide details for their end of the cross-connect. No other region or
    customer information is provided on the document.

  5. A, B. Setting up an AWS Direct Connect private VIF will enable connectivity to the VPC.
    Over this connectivity, a Network Load Balancer will load balance traffic to servers in the
    VPC and those on-premises.

  6. A. The test VPC can be accessed directly over the private VIF. It is not a good practice to
    access Amazon EC2 instances using public IPs when a more secure alternative exists.
    Option C is possible, but it induces additional latency.

  7. B. The minimum size subnet that you can have in a VPC is /28. A /27 Classless
    Inter-Domain Routing (CIDR) block may contain two /28 subnets.
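    The subnet arithmetic above can be checked with Python’s standard ipaddress module;
    this sketch (addresses chosen for illustration) simply splits a /27 into its two /28 halves:

    ```python
    import ipaddress

    # A /27 block such as 10.0.0.0/27 holds 32 addresses.
    block = ipaddress.ip_network("10.0.0.0/27")

    # Splitting it one prefix length deeper yields exactly two /28 subnets,
    # each of which meets the VPC minimum subnet size of /28.
    halves = list(block.subnets(prefixlen_diff=1))
    print(halves)  # [IPv4Network('10.0.0.0/28'), IPv4Network('10.0.0.16/28')]
    ```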

  8. C. When you provision a VPC, each route table has an immutable local route that allows
    all subnets to route traffic to one another.

  9. C. You cannot add different RFC1918 CIDR ranges to an existing VPC, and you also
    cannot use new CIDR ranges on existing subnets. In addition, NAT Gateways will not
    support custom NAT. The only option presented that works is peering to a new VPC.
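    One detail worth noting when peering to a new VPC: the two VPCs must not have
    overlapping CIDR blocks. A quick pre-check with Python’s ipaddress module, using the
    ranges from the question:

    ```python
    import ipaddress

    old_vpc = ipaddress.ip_network("192.168.0.0/16")
    new_vpc = ipaddress.ip_network("10.0.0.0/16")

    # VPC peering requires non-overlapping address space between the two VPCs.
    if old_vpc.overlaps(new_vpc):
        raise ValueError("CIDR ranges overlap; the peering connection will fail")
    print("No overlap; peering between the two VPCs is possible")
    ```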

  10. D. This is the easiest way to ensure that content in an Amazon S3 bucket is only accessed
    by Amazon CloudFront.

  11. A, B. When using an RTMP distribution for Amazon CloudFront, you need to provide
    both your media files and a media player to your end users with your distribution. You
    need two types of distributions: a web distribution to serve the media player and an RTMP
    distribution for the media files.



  12. D. Enhanced networking can help reduce jitter and improve network performance.
    Placement groups and lower latency will not assist with flows leaving the VPC. Additional
    network interfaces do not affect network performance. An Application Load Balancer will
    not assist with these performance issues.

  13. B. Using more than one instance will increase the performance because any given flow
    to Amazon S3 will be limited to 25 Gbps. Moving the instance will not increase Amazon S3
    bandwidth. Placement groups will not increase Amazon S3 bandwidth either. Amazon S3
    cannot be natively placed behind a Network Load Balancer.

  14. B, E. AWS CloudFormation change sets are computed from the differences between an
    existing stack and a new template. This can be subsequently applied to update the stack.
    AWS CloudFormation does not inspect the underlying resources to see if they have been
    altered. Change sets cannot be edited or reversed.

  15. D. A stack can have an IAM service role attached to it that specifies the actions that AWS
    CloudFormation is allowed to perform while managing the stack. If the stack does not
    have an attached IAM service role, then the stack uses the caller’s credentials—those of
    the unprivileged user in this case. Stack policies can also allow resources to be preserved,
    but all actions are permitted without a policy. If the user did not have permission to call
    CloudFormation:UpdateStack, then the error would have occurred before any resource
    updates were attempted.

  16. B. DNS resolution is supported over VPC peering connections; however, DNS resolution
    must be enabled for the peering connection.

  17. B. To improve the accuracy of geolocation routing, Amazon Route 53 supports the
    edns-client-subnet extension of EDNS0.

  18. D. When you associate an Amazon CloudFront distribution with an AWS Lambda@Edge
    function, Amazon CloudFront intercepts requests and responses at Amazon CloudFront
    edge locations. Lambda@Edge functions execute in response to Amazon CloudFront events
    in the region or edge location that is closest to your customer.

  19. D. Security groups control access to Amazon RDS.

  20. A. AWS edge locations classify and prioritize traffic to mitigate the impact of malicious
    actors.

  21. C. Network ACL rules can deny traffic.

  22. D. The PCI DSS audit report contains statements about guest-to-guest separation in the AWS
    hypervisor. If this guest-to-guest separation assurance is insufficient for your own threat
    model, Amazon Elastic Compute Cloud (Amazon EC2) Dedicated Instances are also available.

  23. B. Sticky sessions will enable a session to be kept with the same web server to facilitate
    stateful connections.

  24. D. Because you can access the instance but not the Internet, the missing piece is a default
    route that sends Internet-bound traffic through the on-premises network.

  25. B. VPC peering connections are not transitive.

Chapter 1: Introduction to Advanced Networking


Review Questions

  1. Which of the following services provides private connectivity between AWS and your
    data center, office, or colocation environment?

    1. Amazon Route 53

    2. AWS Direct Connect

    3. AWS WAF

    4. Amazon Virtual Private Cloud (Amazon VPC)

  2. Which AWS Cloud service uses edge locations to deliver content to end users?

    1. Amazon Virtual Private Cloud (Amazon VPC)

    2. AWS Shield

    3. Amazon CloudFront

    4. Amazon Elastic Compute Cloud (Amazon EC2)

  3. Which of the following statements is true?

    1. AWS Regions consist of multiple edge locations.

    2. Edge locations consist of multiple Availability Zones.

    3. Availability Zones consist of multiple AWS Regions.

    4. AWS Regions consist of multiple Availability Zones.

  4. Which of the following describes a physical location around the world where AWS clusters
    data centers?

    1. Endpoint

    2. Collection

    3. Fleet

    4. Region

  5. What feature of AWS Regions allows you to operate production systems that are more
    highly available, fault-tolerant, and scalable than is possible using a single data center?

    1. Availability Zones

    2. Replication areas

    3. Geographic districts

    4. Compute centers

  6. What AWS Cloud service provides a logically-isolated section of the AWS Cloud where you
    can launch AWS resources in a logical network that you define?

    1. Amazon Simple Workflow Service (Amazon SWF)

    2. Amazon Route 53

    3. Amazon Virtual Private Cloud (Amazon VPC)

    4. AWS CloudFormation



  7. Which AWS Cloud service provides Distributed Denial of Service (DDoS) mitigation?

    1. AWS Shield

    2. Amazon Route 53

    3. AWS Direct Connect

    4. Amazon Elastic Compute Cloud (Amazon EC2)

  8. How many companies operate the AWS global infrastructure?

    1. 1

    2. 2

    3. 3

    4. 4

  9. Amazon Virtual Private Cloud (Amazon VPC) enables which one of the following?

    1. Connectivity from your on-premises network

    2. Creation of a logical network defined by you

    3. Edge caching of user content

    4. Network threshold alarms

  10. Which Amazon Virtual Private Cloud (Amazon VPC) component maintains a current
    topology map of the customer environment?

    1. Route table

    2. Mapping service

    3. Border Gateway Protocol (BGP)

    4. Interior Gateway Protocol (IGP)

  11. You may specify which of the following when creating a Virtual Private Cloud (VPC)?

    1. AWS data centers to use

    2. 802.1x authentication methods

    3. Virtual Local Area Network (VLAN) tags

    4. IPv4 address range

  12. Amazon Route 53 allows you to perform which one of the following actions?

    1. Create subnets

    2. Register domains

    3. Define route tables

    4. Modify stateful firewalls



  13. Which service provides a more consistent network experience when connecting to AWS
    from your corporate network?

    1. AWS Direct Connect

    2. Amazon CloudFront

    3. Internet-based Virtual Private Network (VPN)

    4. Amazon Route 53

  14. Which AWS Cloud service enables you to define customizable web security rules?

    1. Amazon Route 53

    2. AWS Shield

    3. AWS WAF

    4. GuardDuty

  15. Which service increases the fault tolerance of your Amazon Elastic Compute Cloud
    (Amazon EC2) applications on AWS?

    1. AWS Direct Connect

    2. Elastic Load Balancing

    3. AWS Shield

    4. AWS WAF

Chapter 2: Introduction to Amazon VPC


Review Questions

  1. You are a solutions architect working for a large travel company that is migrating its
    existing server estate to AWS. You have recommended that they use a custom Virtual Private
    Cloud (VPC), and they have agreed to proceed. They will need a public subnet for their
    web servers and a private subnet for their databases. They also require the web servers and
    database servers to be highly available, and there is a minimum of two web servers and two
    database servers each. How many subnets should you have to maintain high availability?

    1. 2

    2. 3

    3. 4

    4. 1

  2. You launch multiple Amazon Elastic Compute Cloud (Amazon EC2) instances into a
    private subnet. These instances need to access the Internet to download patches. You decide to
    create a Network Address Translation (NAT) gateway. Where in the VPC should the NAT
    gateway reside?

    1. In the private subnet

    2. In the public subnet

    3. In the Virtual Private Gateway (VGW)

    4. In the Internet gateway

  3. You are supporting a customer that executes tightly coupled High Performance Computing
    (HPC) workloads. What Virtual Private Cloud (VPC) option provides high-throughput,
    low-latency, and high packet-per-second performance?

    1. NIC Teaming

    2. 25 Gbps Ethernet

    3. IPv6 addressing

    4. Placement groups

  4. What happens when you create a new Virtual Private Cloud (VPC)?

    1. A main route table is created by default.

    2. Three subnets are created by default, one for each Availability Zone.

    3. Three subnets are created by default in one Availability Zone.

    4. An Internet gateway is created by default.

  5. How many Internet gateways can you attach to a Virtual Private Cloud (VPC) at any
    one time?

    1. 1

    2. 2

    3. 3

    4. 4



  6. What aspect of a Virtual Private Cloud (VPC) is stateful?

    1. Network Access Control Lists (ACLs)

    2. Security groups

    3. VPC Flow Logs

    4. Prefix list

  7. Which of the following exposes the Amazon side of a Virtual Private Network (VPN)
    connection?

    1. An Elastic IP address

    2. A customer gateway

    3. An Internet gateway

    4. A Virtual Private Gateway (VGW)

  8. Which Amazon Virtual Private Cloud (Amazon VPC) feature allows you to create a
    dual-homed instance?

    1. Elastic IP address

    2. Customer gateways

    3. Security groups

    4. Elastic network interface

  9. How many Internet Protocol Security (IPsec) tunnels are available for a single Virtual
    Private Network (VPN) connection?

    1. 4

    2. 3

    3. 2

    4. 1

Chapter 3: Advanced Amazon Virtual Private Cloud (Amazon VPC)


Review Questions

  1. Which of the following is a security benefit of Amazon Virtual Private Cloud (Amazon
    VPC) endpoints?

    1. VPC endpoints provide private connectivity that increases performance to AWS Cloud
      services.

    2. VPC endpoints limit access to services from the Internet, reducing who can access the
      Application Programming Interfaces (APIs) and services that AWS provides.

    3. VPC endpoints provide greater availability and reliability than public endpoints, which
      increases security by limiting access for Distributed Denial of Service (DDoS) and other
      attacks.

    4. VPC endpoints provide private access, limiting the number of instances that require
      Internet access.

  2. You are configuring a secure access policy for your Amazon Simple Storage Service
    (Amazon S3) bucket. There is a Virtual Private Cloud (VPC) endpoint and Amazon S3
    bucket policy that restricts access to the VPC endpoint. When browsing the bucket through
    the AWS Management Console, you do not have access to the buckets. Which statement is
    NOT true?

    1. This is expected behavior.

    2. Your corporate web proxy may be blocking access to downloading objects.

    3. The objects are still available via the Amazon S3 VPC endpoint.

    4. You must specifically enable AWS Management Console access as part of the Amazon
      S3 VPC endpoint policy.

  3. You have created a centralized, shared service Virtual Private Cloud (VPC) for your
    organization. It uses VPC peering, and you have been asked to evaluate AWS
    PrivateLink to optimize connectivity. Which of the following design considerations are
    true? (Choose two.)

    1. Applications that require the source IP address will have access to the source IP
      through AWS PrivateLink.

    2. The scalability of VPC peering is higher for high-bandwidth applications. This allows
      for faster transfers and more spoke VPCs.

    3. AWS PrivateLink is only appropriate for solutions that originate requests to the services
      VPC. Services in the shared VPC cannot initiate connections to spoke VPCs.

    4. AWS PrivateLink supports more connected VPCs than VPC peering.

    5. AWS PrivateLink will increase the overall performance capability of the shared services
      by using a Network Load Balancer.



  4. You have configured an AWS PrivateLink connection between your Virtual Private Cloud
    (VPC) and the VPC of a business partner’s hospital. The hospital has specialized
    applications that staff developed in-house; some were developed more than 10 years ago.
    The hospital is trying to enable access to your private service but is having problems
    connecting to your service. Which of the following are possible solutions? (Choose two.)

    1. The hospital is sending traffic over User Datagram Protocol (UDP). They must find a
      way to send traffic over Transmission Control Protocol (TCP).

    2. The hospital application does not support Domain Name System (DNS). They can
      manually specify the IP address of the VPC endpoint.

    3. VPC endpoints are not supported by applications that do not support DNS names.

    4. It is possible that the hospital applications need to support the appropriate authentica-
      tion methods to use VPC endpoints in their VPC.

    5. Create an IP Security (IPsec) Virtual Private Network (VPN) through the VPC
      endpoint to enable tunneling of all traffic types for better compatibility.

  5. You have configured a new Amazon Simple Storage Service (Amazon S3) endpoint in your
    Virtual Private Cloud (VPC). You have created a public Amazon S3 bucket so that you can
    test connectivity. You can download objects from your laptop but not from instances in the
    VPC. Which of the following could be the problem? (Choose two.)

    1. Domain Name System (DNS) was not enabled for the subnets, so you must enable DNS.

    2. There are not enough free IP addresses in your subnet, so you must choose a larger
      subnet or remove unused interfaces and IP addresses.

    3. The VPC endpoint is attached to a public subnet, and you must configure the endpoint
      for a private subnet.

    4. The route to the Amazon S3 prefix list is not in the routing table for the instance’s subnet.

  6. You have configured private subnets so that applications can download security updates.
    You have a Network Address Translation (NAT) instance in each Availability Zone as the
    default gateway to the Internet for each private subnet. You find that you cannot reach port
    8080 of a server on the Internet from any of your private subnets. Which are the most likely
    causes of the problem? (Choose two.)

    1. The inbound security group does not allow port 8080 outbound.

    2. The NAT instances are blocking traffic to port 8080.

    3. The NAT instances have run out of ports to NAT traffic.

    4. The inbound network Access Control List (ACL) blocks traffic to port 8080.

    5. The remote server is blocking access from your instances.



  7. You have created three Virtual Private Clouds (VPCs) named A, B, and C. VPC A is peered
    with VPC B. VPC B is peered with VPC C. Which statement is true about this peering
    arrangement?

    1. Instances in VPC A can reach instances in VPC C by default.

    2. Instances in VPC A can reach instances in VPC C if the correct routes are configured.

    3. Instances in VPC A can reach instances in VPC C if they use a proxy instance in
      VPC B.

    4. Instances in VPC A can reach instances in VPC C if they set their routes to an instance
      in VPC B.

  8. You have configured a consumer Virtual Private Cloud (VPC) endpoint for a remote
    authentication service hosted by a business partner using AWS PrivateLink. The endpoints
    have been whitelisted and configured on the consumer and provider. Some instances are not
    able to access the private authentication service. Which of the following could cause this
    issue? (Choose two.)

    1. The prefix list to the VPC endpoint is not configured in all subnets.

    2. The instances do not have enough network interfaces to connect to the provider
      endpoint.

    3. The instances are not using the correct Domain Name System (DNS) entry to reach the
      VPC endpoint.

    4. The outbound security group of the instances does not allow the authentication port.

    5. The route to the endpoint does not include all of the provider’s IP addresses.

  9. You are trying to create a new Virtual Private Cloud (VPC). You try to add a Classless
    Inter-Domain Routing (CIDR) range, but the additional CIDR range is not being applied.
    Which of the following could solve this issue? (Choose two.)

    1. Delete unused routes if you are at the maximum allowed routes.

    2. Delete unused subnets if you are at the maximum allowed subnets.

    3. Delete unused, additional VPCs if you are at the maximum allowed VPCs.

    4. Define a valid CIDR range based on the original VPC CIDR.

    5. The additional CIDR range is currently being used by another VPC.

  10. You have defined your original Virtual Private Cloud (VPC) Classless Inter-Domain
    Routing (CIDR) as 192.168.20.0/24. Your on-premises infrastructure is defined as
    192.168.128.0/17. You have configured a route to on-premises as 192.168.0.0/16 in your
    VPC route table. You have added a new CIDR range of 192.168.100.0/24 to your VPC.
    Users on-premises say that they can no longer reach the original 192.168.20.0/24 addresses.
    Which of the following is true?

    1. The route should be defined for 192.168.128.0/17 to allow more granular routing to
      on-premises devices.

    2. The new CIDR range should be contiguous to the existing VPC CIDR range.

    3. New CIDR ranges cannot be more specific than existing routes.

    4. This is a valid configuration, so the issue is not related to the CIDR configuration.
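    Route evaluation in a VPC route table follows longest-prefix matching, which is the key
    to reasoning about scenarios like the one above. A minimal sketch under that assumption,
    with illustrative placeholder targets rather than real AWS identifiers:

    ```python
    import ipaddress

    # Simplified route table: destination CIDR -> target.
    # Target names are illustrative placeholders, not real AWS identifiers.
    routes = {
        "192.168.20.0/24": "local",        # original VPC CIDR
        "192.168.100.0/24": "local",       # newly added VPC CIDR
        "192.168.0.0/16": "vgw-onprem",    # route toward on-premises
    }

    def lookup(dest: str) -> str:
        """Return the target of the most specific (longest-prefix) matching route."""
        addr = ipaddress.ip_address(dest)
        best = None
        for cidr, target in routes.items():
            net = ipaddress.ip_network(cidr)
            # Longest prefix wins, exactly as in VPC route evaluation.
            if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, target)
        return best[1]

    print(lookup("192.168.100.5"))   # prints "local": the /24 beats the /16
    print(lookup("192.168.200.9"))   # prints "vgw-onprem": only the /16 matches
    ```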



  11. You run a hosted service on AWS. Each copy of your hosted service is in a separate Virtual
    Private Cloud (VPC) and is dedicated to a single customer. Your hosted service has
    thousands of customers. The services in the dedicated VPCs require access to central
    provisioning and update services. Which connectivity methods can enable this architecture?
    (Choose two.)

    1. Use VPC peering between the dedicated VPCs and the central service.

    2. Reference security groups across VPCs but use Network Address Translation (NAT)
      Gateways for inter-VPC access.

    3. Use AWS PrivateLink to access central services from the dedicated VPCs.

    4. Make the central services public. Access the central services over the Internet using
      strong encryption and authentication.

    5. Create a VPN from the Virtual Private Gateway (VGW) in each hosted VPC to the
      VGW in the provisioning VPC.

  12. Your networking group has decided to migrate all of the 192.168.0.0/16 Virtual Private
    Cloud (VPC) instances to 10.0.0.0/16. Which of the following is a valid option?

    1. Add a new 10.0.0.0/16 Classless Inter-Domain Routing (CIDR) range to the
      192.168.0.0/16 VPC. Change the existing addresses of instances to the 10.0.0.0/16
      space.

    2. Change the initial VPC CIDR range to the 10.0.0.0/16 CIDR.

    3. Create a new 10.0.0.0/16 VPC. Use VPC peering to migrate workloads to the new
      VPC.

    4. Perform Network Address Translation (NAT) on everything in the 192.168.0.0/16
      space to the 10.0.0.0/16 space using NAT Gateways.

  13. Your organization has a single Virtual Private Cloud (VPC) for development workloads.
    An open source Virtual Private Network (VPN) running on an Amazon Elastic Compute
    Cloud (Amazon EC2) instance is configured to provide developers with remote access. The
    VPN instance gives users IP addresses from a Classless Inter-Domain Routing (CIDR) range
    outside the VPC and performs a source Network Address Translation (NAT) on received
    traffic to the private address of the instance. Your organization acquired a company that
    also uses AWS with their own VPC. You have configured VPC peering between the two
    VPCs and instances can communicate without issue. Which of the following flows will fail?

    1. An incoming connection from one user on the VPN to another user on the VPN.

    2. A virus scan from an instance in the acquired VPC to a user connected through VPN.

    3. An Application Programming Interface (API) request from a VPN user to an instance
      in the acquired VPC.

    4. A web request to the Internet from a user connected through VPN.

  14. Which of the following services can you access over AWS Direct Connect? (Choose two.)

    1. Interface Virtual Private Cloud (VPC) endpoints

    2. Gateway VPC endpoints

    3. Amazon Elastic Compute Cloud (Amazon EC2) instance metadata

    4. Network Load Balancer



  15. Some people in your company have created a very complicated and management-intensive
    workflow for automating development builds and testing. Those who created it have asked
    not to have to repeat this work more than once. The security organization, however, wants
    every developer to have their own account to reduce the blast radius of development issues.
    What is the best design for providing access to the development system?

    1. Provide one large Virtual Private Cloud (VPC). Configure network Access Control
      Lists (ACLs) and security groups so that the blast radius for developers is limited.

    2. Ask the developers simply to automate the deployment of their build system and make
      it a distributed system. Deploy a copy of this in each developer VPC to prevent any
      blast radius or complexity problems.

    3. Deploy the development system in a central VPC. Allow developers to access the
      system through AWS PrivateLink.

    4. Deploy the development system in a central VPC. Extend network interfaces with
      cross-account permissions so that developers can route their code to the development
      system.

  16. An administrator was using an Elastic IP address to perform Application Programming
    Interface (API) calls with a business partner. The business partner whitelisted that IP in
    their firewalls. Unfortunately, the administrator ran a script that they did not understand,
    which deleted the instance; the public address is no longer available. The administrator has
    submitted an API call to recall the Elastic IP address, but the address is not being returned.
    What could be the cause of this issue? (Choose two.)

    1. The IP was auto-assigned rather than an assigned Elastic IP address.

    2. The Elastic IP address was not tagged correctly for recall.

    3. The IP was never owned by the account.

    4. It is not possible to recall an Elastic IP address after it has been released.

    5. The associated instance has the maximum number of assigned Elastic IP addresses.

Exercises


  1. Navigate to the Amazon VPC dashboard in the AWS Management Console and create
    three VPCs: Spoke VPC1, Spoke VPC2, and Hub VPC.

  2. Navigate to the Amazon EC2 dashboard and launch an Amazon Linux AMI in the Hub
    VPC. Choose c4.large as the Amazon EC2 instance type. Make sure that the instance
    is launched in a public subnet that has a route to the Internet gateway for Internet
    traffic. Associate an Elastic IP with the instance.

  3. Disable the source/destination check on the instance from the Amazon EC2 instance
    dashboard and make sure that IP forwarding is enabled at the operating system level.

  4. Use SSH to access the instance and install VPN software that supports BGP routing.
    You can install Quagga for actual BGP routing. You will perform the actual IPsec and
    BGP configuration in a later step.

  5. Configure the VPC route table so that traffic destined for the CIDR ranges of the two
    spoke VPCs is pointed toward the elastic network interface of the Amazon EC2 instance.

  6. Repeat the following steps for Spoke VPC1 and Spoke VPC2.

  7. Navigate to the Amazon VPC dashboard and create a new VGW.

  8. Attach the VGW to Spoke VPC1 or Spoke VPC2.

  9. In the Amazon VPC dashboard, navigate to Customer Gateways and create a new
    customer gateway. Enter the Elastic IP of the Amazon EC2 instance that you created
    earlier.

  10. In the Amazon VPC dashboard, navigate to VPN connections and create a new VPN
    connection. Provide the customer gateway, the routing type, and other required
    information.

  11. Download the configuration file from the AWS Management Console.

  12. Enable VGW route propagation in the VPC subnet route tables.

  13. Use SSH to access the Amazon EC2 instance and configure IPsec and BGP using the
    configuration files that you downloaded in the earlier step. Perform this configuration
    for both Spoke VPC1 and Spoke VPC2.

  14. Launch two Amazon EC2 instances in the spoke VPC subnets and test connectivity
    between them.

You have now successfully set up a transitive routing architecture leveraging VPN overlay
connectivity. This is verified in step 14. All traffic that is sent between VPC 1 and VPC 2
will now flow via the transit EC2 instances.
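For reference, several of the console steps above can also be performed with the AWS CLI. The following is a minimal sketch, not a complete script: the CIDR blocks, the BGP ASN, and the INSTANCE_ID, ELASTIC_IP, and ROUTE_TABLE_ID variables are illustrative placeholders that you would replace with your own values, and the commands assume configured AWS credentials.

```shell
# Step 1 (partial): create the hub VPC; repeat with different CIDRs for the spokes.
HUB_VPC=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --query 'Vpc.VpcId' --output text)

# Step 3: disable the source/destination check on the transit EC2 instance.
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --no-source-dest-check

# Steps 7-8: create a VGW and attach it to a spoke VPC.
VGW=$(aws ec2 create-vpn-gateway --type ipsec.1 \
    --query 'VpnGateway.VpnGatewayId' --output text)
aws ec2 attach-vpn-gateway --vpn-gateway-id "$VGW" --vpc-id "$SPOKE_VPC"

# Step 9: register the transit instance's Elastic IP as a customer gateway.
CGW=$(aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip "$ELASTIC_IP" --bgp-asn 65000 \
    --query 'CustomerGateway.CustomerGatewayId' --output text)

# Step 10: create the VPN connection (dynamic routing over BGP by default).
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id "$CGW" --vpn-gateway-id "$VGW"

# Step 12: enable VGW route propagation in a subnet route table.
aws ec2 enable-vgw-route-propagation \
    --route-table-id "$ROUTE_TABLE_ID" --gateway-id "$VGW"
```

The configuration file referenced in steps 11 and 13 can be retrieved from the VPN connection's CustomerGatewayConfiguration attribute once the connection is created.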


Chapter 4: Virtual Private Networks


Review Questions

  1. What are the two endpoints that can be used to terminate a Virtual Private Network (VPN)
    on AWS? (Choose two.)

    1. Amazon Elastic Compute Cloud (Amazon EC2) instance

    2. AWS VPN CloudHub

    3. Amazon Virtual Private Cloud (Amazon VPC)

    4. Internet gateway

    5. Virtual Private Gateway (VGW)

  2. How many Virtual Private Network (VPN) tunnels have to be established to use the
    out-of-the-box high availability of an AWS-managed VPN option?

    1. 1

    2. 2

    3. 3

    4. 4

  3. What are the routing mechanisms supported by a Virtual Private Gateway (VGW)?
    (Choose two.)

    1. Open Shortest Path First (OSPF)

    2. Border Gateway Protocol (BGP)

    3. Static Routes

    4. Routing Information Protocol Version 2 (RIPv2)

    5. Enhanced Interior Gateway Routing Protocol (EIGRP)

  4. Which of the following is the responsibility of AWS when you terminate a Virtual Private
    Network (VPN) on an Amazon Elastic Compute Cloud (Amazon EC2) instance?

    1. Configure Internet Protocol Security (IPsec) parameters.

    2. Manage redundancy and high availability of the VPN tunnel.

    3. Manage scaling of the VPN tunnel.

    4. Manage the underlying Amazon EC2 host health.

  5. Which of the following steps is necessary to ensure proper routing when terminating a
    Virtual Private Network (VPN) on an Amazon Elastic Compute Cloud (Amazon EC2)
    instance?

    1. Disable the source/destination check on the Amazon EC2 instance.

    2. Enable the source/destination check on the Amazon EC2 instance.

    3. Enable route propagation in a Virtual Private Cloud (VPC) subnet route table.

    4. Enable enhanced networking mode on the Amazon EC2 instance.

  6. You are tasked with setting up a VPN connection to a VPC from your on-premises data
    center. You have to purchase a new VPN termination device to be used as customer gateway.
    You are leveraging a Virtual Private Gateway at the AWS end. Which of the following must
    be supported by the hardware you choose to deploy on-premises for setting up the VPN
    connection?

    1. BGP routing protocol

    2. 802.1Q encapsulation standard

    3. IPsec protocol

    4. GRE protocol

  7. When setting up a client-to-site Virtual Private Network (VPN) to access AWS resources,
    how can you achieve highest availability with least management overhead?

    1. Leverage the high availability built into Virtual Private Gateway (VGW).

    2. Configure client software to use a DNS name as a VPN termination endpoint. Map the
      DNS name to multiple IP addresses using Amazon Route 53 and set up health checks.

    3. Configure client software to use an EC2 elastic IP as the VPN termination endpoint.
      Build in automation to detect failure, and move Elastic IP from the primary to the
      secondary EC2 instance.

    4. Configure the client software to use an EC2 elastic IP as the VPN termination
      endpoint. Turn on EC2 auto-recovery on this instance.

  8. Which AWS Cloud service is used for a client-to-site Virtual Private Network (VPN)
    considering minimum management overhead?

    1. Virtual Private Gateway (VGW)

    2. Amazon Elastic Compute Cloud (Amazon EC2)

    3. AWS VPN CloudHub

    4. Virtual Private Cloud (VPC) private endpoint

  9. You are deploying an application on multiple Amazon Elastic Compute Cloud (Amazon
    EC2) instances. The application must be U.S. Health Insurance Portability and
    Accountability Act (HIPAA) compliant and requires end-to-end encryption in motion. The
    application runs on Transmission Control Protocol (TCP) port 7128. What is the most
    effective way to deploy the application?

    1. Navigate to the Amazon EC2 instance’s properties and check the encryption box.

    2. Set up an Internet Protocol Security (IPsec) Virtual Private Network (VPN) between all
      Amazon EC2 instances in a mesh.

    3. Use Secure Sockets Layer (SSL) to encrypt traffic at the application layer.

    4. Enable encryption using an AWS KMS key for all Amazon EBS volumes.

  10. Which of the following parameters are automatically generated when you create a Virtual
    Private Network (VPN) connection to a Virtual Private Gateway (VGW)?

    1. VGW public IP

    2. VGW Border Gateway Protocol (BGP) Autonomous System Number (ASN)

    3. Inside tunnel IP addresses

    4. Internet Protocol Security (IPsec) Pre-Shared Key (PSK)

Chapter 5: AWS Direct Connect


Review Questions

  1. A private Virtual Interface (VIF) on AWS Direct Connect attaches to your Virtual Private
    Cloud (VPC) using which of the following?

    1. Internet gateway

    2. VPC endpoint

    3. Virtual Private Gateway (VGW)

    4. Peering connection

  2. Which routing protocol is supported by AWS Direct Connect Virtual Interfaces (VIFs)?

    1. Border Gateway Protocol (BGP)

    2. Routing Information Protocol (RIP)

    3. Open Shortest Path First (OSPF)

    4. Intermediate System to Intermediate System (IS-IS)

  3. What is the minimum number of connections supported in a Link Aggregation Group
    (LAG)?

    1. 4

    2. 3

    3. 2

    4. 1

  4. Which of the following is a type of Virtual Interface (VIF) that is supported on AWS Direct
    Connect?

    1. Global

    2. Virtual Private Network (VPN)

    3. Local

    4. Public

  5. A resilient AWS Direct Connect connection requires you to connect at what number of
    AWS Direct Connect locations?

    1. 1

    2. 2

    3. 3

    4. 4

  6. How many prefixes can be announced from a customer to AWS over an AWS Direct
    Connect Private Virtual Interface (VIF)?

    1. 10

    2. 50

    3. 100

    4. 1,000

  7. When using a Link Aggregation Group (LAG) composed of two AWS Direct Connect
    connections, how many IPv4 Border Gateway Protocol (BGP) sessions are required per
    Virtual Interface (VIF)?

    1. 1

    2. 2

    3. 3

    4. 4

  8. Which of the following has the highest route priority in the Border Gateway Protocol (BGP)
    path selection algorithm used by AWS?

    1. Static routes

    2. Local routes to the Virtual Private Cloud (VPC)

    3. Shortest AS path

    4. BGP routes from a Virtual Private Network (VPN)

  9. Hosted Virtual Interfaces (VIFs) on AWS Direct Connect describe which of the following
    scenarios?

    1. A partner providing a new connection on their interconnect to a customer

    2. A customer providing a Virtual Interface (VIF) to another customer on their
      connection

    3. A partner providing a VIF on their interconnect to a customer

    4. A customer providing a new connection on their connection to another customer

  10. All billing for AWS Direct Connect ceases when which of the following occurs?

    1. The last Virtual Interface (VIF) on a connection is deleted.

    2. The port on the customer equipment is disabled.

    3. The cross-connect is removed.

    4. The connection is deleted.

Chapter 6: Domain Name System and Load Balancing


Review Questions

  1. What are the two types of Amazon Route 53 hosted zones? (Choose two.)

    1. Public hosted zones

    2. Global hosted zones

    3. NULL hosted zones

    4. Routed hosted zones

    5. Private hosted zones

  2. Amazon Route 53 cannot route queries to which AWS resources?

    1. Amazon CloudFront distribution

    2. Elastic Load Balancing load balancer

    3. Amazon Elastic Compute Cloud (Amazon EC2) instance

    4. AWS CloudFormation

  3. To stop sending traffic to resources with weighted routing for Amazon Route 53, you must
    do which one of the following?

    1. Delete the resource record.

    2. Change the resource record weight to 100.

    3. Change the resource record weight to 0.

    4. Switch to a multivalue answer resource record.

  4. If you do not associate a health check with an Amazon Route 53 multivalue answer record,
    which of the following occurs?

    1. Amazon Route 53 always considers the record to be healthy.

    2. Amazon Route 53 always considers the record to be unhealthy.

    3. Amazon Route 53 will give you an error.

    4. You must use a Text (TXT) record instead.

  5. How do you access traffic flow for Amazon Route 53?

    1. Using the AWS Command Line Interface (CLI)

    2. Through an Amazon Elastic Compute Cloud (Amazon EC2) instance inside your
      Amazon Virtual Private Cloud (Amazon VPC)

    3. Through AWS Direct Connect

    4. Using the AWS Management Console

  6. What should you use if you want Amazon Route 53 to respond to Domain Name System
    (DNS) queries with up to eight healthy records selected at random?

    1. Geolocation routing policy

    2. Simple routing policy

    3. Alias record

    4. Multivalue answer routing policy

  7. Why is referencing the Application Load Balancer or Classic Load Balancer by its DNS
    CNAME recommended?

    1. IP addresses may change as the load balancers scale.

    2. DNS CNAMEs provide a lower latency than IP addresses.

    3. You want to preserve the source IP of the client.

    4. IP addresses are public and open to the Internet.

  8. With the enableDnsHostnames attribute set to true, Amazon will do which of the following?

    1. Enable Domain Name System (DNS) resolution for your Amazon Virtual Private Cloud
      (Amazon VPC).

    2. Auto-assign DNS hostnames to Amazon Elastic Compute Cloud (Amazon EC2)
      instances.

    3. Assign internal-only DNS hostnames to Amazon EC2 instances.

    4. Allow for the manual configuration of hostnames to Amazon EC2 instances.

  9. You have the enableDnsHostnames attribute set to true for your VPC. Your Amazon Elastic
    Compute Cloud (Amazon EC2) instances are not receiving DNS hostnames, however. What
    could be the potential cause?

    1. DNS resolution is not supported over VPC peering.

    2. You need to configure your Amazon Route 53 private hosted zone.

    3. Amazon does not assign DNS hostnames to instances.

    4. enableDnsSupport is not set to true.

  10. You are assessing load balancer options for your AWS deployment. You want support for
    static IP addresses for the load balancer. What would be the best choice of Elastic Load
    Balancing load balancer for this purpose?

    1. Amazon Route 53

    2. Network Load Balancer

    3. Application Load Balancer

    4. Classic Load Balancer

Chapter 7: Amazon CloudFront


Review Questions

  1. What is a Content Delivery Network (CDN)?

    1. A managed Domain Name System (DNS) service

    2. A type of load balancer

    3. A distributed network of caches

    4. A protocol for the distribution of traffic over the web

  2. You are using Amazon CloudFront for your website. A user requests content, which is
    routed to a local edge location. What happens before the requested content is available at
    that edge location?

    1. Amazon CloudFront will respond with an HTTP 404 error.

    2. Amazon CloudFront will not send users to edge locations that do not contain the
      requested data.

    3. Amazon CloudFront always pre-positions content in edge locations so that users never
      experience a cache miss.

    4. The edge location sends a request to the origin server, serves the user the content, and
      then stores the content.

  3. Amazon CloudFront can work with which of the following origin servers? (Choose three.)

    1. Amazon Simple Storage Service (Amazon S3)

    2. Elastic Load Balancing

    3. On-premises servers

    4. An Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling group

    5. A Virtual Private Cloud (VPC) route table

  4. What is the default expiry time for an Amazon CloudFront cache?

    1. 300 seconds

    2. 24 hours

    3. 12 months

    4. Objects never expire by default.

  5. What does the Amazon CloudFront invalidation feature do?

    1. Blocks users from flooding edge locations with requests.

    2. Removes duplicate objects from the origin server.

    3. Allows the override of origin server encryption.

    4. Removes objects from the CloudFront cache.

  6. What does an Amazon CloudFront cache behavior do?

    1. Controls how requests are cached.

    2. Applies rules to control selection of origins.

    3. Enforces HTTPS encryption for all users.

    4. Allows dynamic content caching.

  7. What does Amazon CloudFront do when it uses HTTP Live Streaming (HLS), HTTP
    Dynamic Streaming (HDS), Smooth Streaming, and MPEG DASH formats for streaming
    video?

    1. Uses the native Amazon CloudFront media player for improved performance.

    2. Uses multiple edge locations for improved performance.

    3. Sends parallel streams for improved performance.

    4. Encapsulates video into pull (rather than push) formats that allow clients to adapt to
      changing conditions for improved performance.

  8. When adding an alternate domain to your Amazon CloudFront distribution, the wildcard *
    can be used to do what?

    1. Replace part of a subdomain name (for example, subdomain.*.example.com).

    2. Replace part of a subdomain name (for example, *domain.example.com).

    3. Act in the place of specifying subdomains individually.

    4. Reference multiple files on your origin server.

  9. When using AWS Certificate Manager (ACM) and Amazon CloudFront, you configured
    your certificate within ACM. When you try to enable Amazon CloudFront, however, you
    do not see the certificate available for use. What could be the problem?

    1. ACM does not support Amazon CloudFront.

    2. You need to purchase a certificate from a third-party Certificate Authority (CA) and
      upload it to ACM.

    3. You need to configure the preshared key for ACM.

    4. You might not have created the ACM certificate in the right region.

  10. How can you use the wildcard * when invalidating objects with Amazon CloudFront?

    1. In place of specifying subdomains individually.

    2. As a form of object versioning.

    3. To allow access to your origin server.

    4. To specify a path that applies to many objects.

  11. What do Amazon CloudFront access logs do?

    1. They are a way to monitor performance of your Amazon Simple Storage Service
      (Amazon S3) bucket.

    2. They contain detailed information about every user request that Amazon CloudFront
      receives.

    3. They enable you to capture information about the IP traffic going to and from network
      interfaces.

    4. They enable governance, compliance, operational auditing, and risk auditing of your
      AWS account.

Chapter 8: Network Security


Review Questions

  1. Which of the following allows you to create new AWS accounts programmatically?

    1. AWS Identity and Access Management (IAM)

    2. AWS Organizations

    3. Amazon Simple Storage Service (Amazon S3)

    4. AWS CloudTrail

  2. AWS CloudFormation allows you to define your infrastructure as code in what artifact?

    1. JSON

    2. StackSets

    3. Stacks

    4. Templates

  3. Which of the following is a security benefit of services such as AWS Service Catalog?
    (Choose two.)

    1. Automation

    2. Repeatability

    3. Self-service

    4. Curation

    5. AWS Marketplace Integration

  4. Amazon Route 53 uses several methods to deliver a 100 percent availability Service Level
    Agreement (SLA). Which method guards against failures of Top Level Domain (TLD)
    servers?

    1. Shuffle sharding

    2. Routing policies

    3. Anycast striping

    4. Latency routing

  5. Which of the following allows you to restrict access to your Amazon Simple Storage Service
    (Amazon S3) bucket to Amazon CloudFront distributions that you control?

    1. Custom HTTP header

    2. Origin Access Identity (OAI)

    3. AWS Lambda@Edge

    4. Preshared keys

  6. Private keys in AWS Certificate Manager are protected using which one of the following?

    1. AWS CloudHSM

    2. AWS Key Management Service

    3. Client-side encryption

    4. Amazon S3 server-side encryption

  7. AWS WAF integrates with which one of the following AWS resources?

    1. Amazon Simple Storage Service (Amazon S3)

    2. Amazon DynamoDB

    3. Amazon CloudFront

    4. Amazon Route 53

  8. AWS Shield Standard provides protection at which layers of the Open Systems
    Interconnection (OSI) model? (Choose two.)

    1. Physical (Layer 1)

    2. Data Link (Layer 2)

    3. Network (Layer 3)

    4. Transport (Layer 4)

    5. Application (Layer 7)

  9. Which Amazon Virtual Private Cloud (Amazon VPC) feature allows you to access AWS
    Cloud services without the use of an Internet gateway?

    1. VPC endpoints

    2. VPC peering

    3. Customer-hosted endpoints

    4. Network Address Translation (NAT) gateway

  10. What aspect of an Amazon Virtual Private Cloud (Amazon VPC) is stateful?

    1. Network Access Control Lists (ACLs)

    2. Security groups

    3. Amazon VPC Flow Logs

    4. Prefix list

  11. Which AWS Cloud service will help you identify sensitive account data, like access and
    secret keys, stored in an Amazon Simple Storage Service (Amazon S3) bucket?

    1. Amazon Inspector

    2. AWS Config

    3. AWS CloudTrail

    4. Amazon Macie

  12. You are tasked with identifying unused security groups and ports in a Virtual Private Cloud
    (VPC). Which AWS capabilities should you use?

    1. Amazon CloudWatch metrics

    2. AWS CloudTrail

    3. AWS Config

    4. VPC Flow Logs

  13. To protect its website, the organization directs you to implement known-attacker protection
    for the website. The website resides behind an Application Load Balancer. You have
    subscribed to a threat intelligence service that posts hourly IP reputation lists. What
    combination of AWS Cloud services will allow you to block traffic based on this threat
    intelligence?

    1. Amazon CloudWatch, AWS Lambda, AWS WAF

    2. Amazon CloudFront, AWS Lambda, AWS WAF

    3. AWS CloudTrail, AWS Lambda, AWS Config

    4. AWS CloudTrail, Amazon CloudWatch, AWS Lambda

Chapter 9: Network Performance


Review Questions

  1. In order to decrease the number of instances that have inbound web access, your team has
    recently placed a Network Address Translation (NAT) instance on Amazon Linux in the
    public subnet. The private subnet has a 0.0.0.0/0 route to the elastic network interface of
    the NAT instance. Users are complaining that web responses are slower than normal. What
    are practical steps to fix this issue? (Choose two.)

    1. Replace the NAT instance with a NAT gateway.

    2. Enable enhanced networking on the NAT instance.

    3. Create another NAT instance and add another 0.0.0.0/0 route in the private subnet.

    4. Try a larger instance type for the NAT instance.

  2. Voice calls to international numbers from inside your company must go through an
    open-source Session Border Controller (SBC) installed on a custom Linux Amazon
    Machine Image (AMI) in your Virtual Private Cloud (VPC) public subnet. The SBC
    handles the real-time media and voice signaling. International calls often have garbled
    voice, and it is difficult to understand what people are saying. What may increase the
    quality of international voice calls?

    1. Place the SBC in a placement group to reduce latency.

    2. Add additional network interfaces to the instance.

    3. Use an Application Load Balancer to distribute load to multiple SBCs.

    4. Enable enhanced networking on the instance.

  3. Your big data team is trying to determine why their proof of concept is running slowly.
    For the demo, they are trying to ingest 1 TB of data from Amazon Simple Storage Service
    (Amazon S3) on their c4.8xl instance. They have already enabled enhanced networking.
    What should they do to increase Amazon S3 ingest rates?

    1. Run the demo on-premises and access Amazon S3 from AWS Direct Connect to reduce
      latency.

    2. Split the data ingest on more than one instance, such as two c4.4xl instances.

    3. Place the instance in a placement group and use an Amazon S3 endpoint.

    4. Place a Network Load Balancer between the instance and Amazon S3 for more
      efficient load balancing and better performance.

  4. Your database instance running on an r4.large instance seems to be dropping Transmission
    Control Protocol (TCP) packets based on a packet capture from a host with which it was
    communicating. During initial performance baseline tests, the instance was able to handle
    peak load twice as high as its current load. What could be the issue? (Choose two.)

    1. The r4.large instance may have accumulated network credits before load testing, which
      would allow higher peak values.

    2. There may be additional database processing errors causing connection timeouts.

    3. The read replica database should be placed in a separate Availability Zone.

    4. The Virtual Private Network (VPN) session should be configured for dynamic Border
      Gateway Protocol (BGP) routing for higher availability.

  5. Your development team is testing the performance of a new application using enhanced
    networking. They have updated the kernel to the latest version that supports the Elastic
    Network Adapter (ENA) driver. What are the other two requirements for support?
    (Choose two.)

    1. Use an instance that supports the ENA driver.

    2. Support the Intel Virtual Function driver in addition to the ENA driver.

    3. Flag the Amazon Machine Image (AMI) for enhanced networking support.

    4. Enable enhanced networking on the elastic network interface.

  6. The new architecture for your application involves replicating your stateful application
    data from your Virtual Private Cloud (VPC) in US East (Ohio) to Asia Pacific (Tokyo).
    The replication instances are in public subnets in each region and communicate with
    public addresses over Transport Layer Security (TLS). Your team is seeing much lower
    replication throughput than they see within a single VPC. Which steps can you take to
    improve throughput?

    1. Increase the application’s packets per second.

    2. Configure the Maximum Transmission Unit (MTU) to 9,001 bytes on each instance’s
      eth0 to support jumbo frames.

    3. Create a Virtual Private Network (VPN) connection between the regions and enable
      jumbo frames on each instance.

    4. None of the above

  7. Which networking feature will provide the most benefits to support a clustered computing
    application that requires very low latency and high network throughput?

    1. Enhanced networking

    2. Network Input/Output (I/O) credits

    3. Placement groups

    4. Amazon Route 53 performance groups

  8. What would you recommend to make a scalable architecture for performing very high
    throughput data transfers?

    1. Use enhanced networking.

    2. Configure the Amazon Virtual Private Cloud (Amazon VPC) routing table to have a
      single hop between every instance in the VPC.

    3. Distribute the flows across many instances.

    4. Advertise routes to external networks with Border Gateway Protocol (BGP) to increase
      routing scale.

  9. One of the applications that you want to migrate to AWS has high disk performance
    requirements. You need to guarantee certain baseline performance with low latency. Which
    feature can help meet the performance requirements of this application?

    1. Amazon Elastic Block Store (Amazon EBS) Provisioned Input/Output Per Second
      (IOPS)

    2. Amazon Elastic File System (Amazon EFS)

    3. Dedicated network bandwidth

    4. Quality of Service (QoS)

  10. Your application developers are facing a challenge relating to network performance. Their
    application creates a buffer to accept network data so that it can be analyzed and displayed
    in real time. Packet delays vary between 2 milliseconds and 120 milliseconds, however.
    Which network characteristic do you need to improve?

    1. Bandwidth

    2. Latency

    3. Jitter

    4. Maximum Transmission Unit (MTU)

  11. The operations group at your company has migrated one of your application components
    from R3 instances to R4 instances. The networking performance is not as high as expected,
    however. What could be the issue? (Choose two.)

    1. Instance routes have become more specific, creating network latency.

    2. The operating system does not have the ixgbevf module installed.

    3. The instance type does not support the Elastic Network Adapter (ENA) driver.

    4. The instance or Amazon Machine Image (AMI) is no longer flagged for enhanced
      networking.

  12. Your application is having a slower than expected transfer rate between application tiers.
    What is the best option for increasing throughput?

    1. Use a single Network Load Balancer in front of each instance.

    2. Enable Quality of Service (QoS).

    3. Reduce the jitter in the network.

    4. Increase the Maximum Transmission Unit (MTU).

  13. Your company has an application that it would like to share with a business partner, but the
    performance of the application is business-critical. The network architects are discussing
    using AWS Direct Connect to increase performance. Which of the following are performance
    advantages of AWS Direct Connect compared to a Virtual Private Network (VPN) or Internet
    connectivity? (Choose three.)

    1. Lower latency

    2. Ability to use jumbo frames

    3. Ability to configure Quality of Service (QoS) on the AWS Direct Connect provider’s
      circuits

    4. Lower egress costs

    5. Ability to perform detailed monitoring of the AWS Direct Connect connections

  14. What information is most efficient to determine whether a workload is CPU bound,
    bandwidth bound, or packets per second bound? (Choose four.)

    1. Amazon CloudWatch CPU metrics

    2. Packet captures

    3. Elastic network interface count

    4. Amazon CloudWatch network bytes metrics

    5. Amazon CloudWatch packets per second metrics

    6. Kernel version

    7. Host CPU information

  15. Your organization is planning on connecting to AWS. The organization has decided to use
    a specific Virtual Private Network (VPN) technology for the first phase of the project. You
    are tasked with implementing the VPN server in a Virtual Private Cloud (VPC) and
    optimizing it for performance. What are important considerations for Amazon Elastic
    Compute Cloud (Amazon EC2) VPN performance? (Choose two.)

    1. The VPN instance should support enhanced networking.

    2. Because all VPN connections use the Virtual Private Gateway (VGW), it’s important to
      scale the VGW horizontally.

    3. IP Security (IPsec) VPNs should use a Network Load Balancer to create a more
      scalable VPN service.

    4. Investigate packet per second limitations and bandwidth limitations.

  16. Your research and development organization has created a mission-critical application that
    requires low latency and high bandwidth. The application needs to support AWS best
    practices for high availability. Which of the following is not a best practice for this
    application?

    1. Deploy the application behind a Network Load Balancer for scale and availability.

    2. Use a placement group for the application to guarantee the lowest latency possible.

    3. Enable enhanced networking on all instances.

    4. Deploy the application across multiple Availability Zones.

  17. Your security department has mandated that all traffic leaving a Virtual Private Cloud
    (VPC) must go through a specialized security appliance. This security appliance runs on
    a bespoke operating system that users cannot access. What considerations are the most
    important for this operating system performance on AWS? (Choose two.)

    1. Driver support for the Intel Virtual Function and Elastic Network Adapter (ENA)

    2. Support for Amazon Linux

    3. Instance family and size support

    4. Domain Name System (DNS) resolution speed



  18. Your company has deployed a bursty web application to AWS and would like to improve
    the user experience. It is important for only the web host to have the private key for Transport Layer Security (TLS), so the Classic Load Balancer has a listener on Transmission
    Control Protocol (TCP) port 443. What are some approaches that you can use to reduce
    latency and improve the scale-out process for the application?

    1. Use an Application Load Balancer in front of the application, enabling better utilization of multiple target groups with different HTTP paths and hosts.

    2. Configure enhanced networking on the Classic Load Balancer for lower latency load
      balancing.

    3. Use AWS Certificate Manager (ACM) to distribute new certificates to Amazon CloudFront so that content is handled at the edge.

    4. Use a Network Load Balancer in front of your application to increase network
      performance.

  19. You are in charge of creating a network architecture for a development group that is
    interested in running a real-time exchange on AWS. The participants of the exchange
    expect very low latency but do not operate on AWS. Which description most accurately
    describes the networking and security tradeoffs for potential network designs?

    1. Use AWS Direct Connect to connect to the exchange application. This allows for
      lower latency and native encryption but requires additional configuration to support
      multi-tenancy and agreements from participants.

    2. Configure a separate Virtual Private Network (VPN) connection on the Virtual Private
      Gateway (VGW) for each participant. This will allow individual scaling per participant
      and the lowest latency but requires customers to support VPN devices.

    3. Use AWS Direct Connect to connect to the exchange application. This allows for more
      control of the latency, but it requires organizing connectivity to each of the participants
      and provides no security guarantees.

    4. Allow participants to connect directly via the Internet. This allows customers to come in freely but does not guarantee security. Latency can be managed with Transmission Control Protocol (TCP) tuning and network performance appliances.

  20. Which statement about Maximum Transmission Units (MTUs) on AWS is true?

    1. MTUs define the maximum throughput on AWS.

    2. You must configure a Virtual Private Cloud (VPC) to support jumbo frames.

    3. You must configure a placement group to support jumbo frames.

    4. Increasing the MTU is most beneficial for applications limited by packets per second.
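The packets-per-second angle in the last choice can be sanity-checked with a little arithmetic. The sketch below ignores protocol header overhead and is only meant to show the order of magnitude; the 10 Gbit/s link speed is an arbitrary assumption.

```python
# Rough arithmetic: packets per second needed to move 10 Gbit/s of data
# at two common MTUs. Larger frames mean fewer packets for the same bytes,
# which is why jumbo frames help workloads limited by packets per second.
link_bps = 10 * 10**9          # assumed 10 Gbit/s link
for mtu in (1500, 9001):       # standard Ethernet MTU vs. EC2 jumbo frame MTU
    pps = link_bps / 8 / mtu   # bytes per second divided by bytes per packet
    print(f"MTU {mtu}: ~{pps:,.0f} packets/s")
```

At the larger MTU, roughly six times fewer packets are needed for the same throughput, so a packets-per-second-limited application gains the most.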

  21. What is the advantage of the Data Plane Development Kit (DPDK) over enhanced networking?

    1. DPDK decreases the overhead of Hypervisor networking.

    2. Enhanced networking only increases bursting capacity, whereas DPDK increases
      steady-state performance.

    3. DPDK decreases operating system overhead for networking.

    4. DPDK allows deeper access to AWS infrastructure to enable new networking features
      that enhanced networking does not provide.



  22. What is the optimal performance configuration to enable high-performance networking for
    an Amazon Elastic Compute Cloud (Amazon EC2) instance operating as a firewall?

    1. One elastic network interface for all traffic.

    2. One elastic network interface for management traffic and one elastic network interface
      for each subnet the firewall operates in.

    3. Configure as many elastic network interfaces as possible and use operating system
      routing to split traffic over all interfaces.

    4. None of the above.

  23. Your team uses an application to receive information quickly from other parts of your
    infrastructure. It leverages low-latency multicast feeds to receive information from other
    applications and displays analysis. Which approach could help satisfy the application’s low
    latency requirements in AWS?

    1. Maintain the same multicast groups in AWS because the application will work in a
      Virtual Private Cloud (VPC).

    2. Work with the application owners to find another delivery system such as a message
      queue or broker. Place the applications in a placement group for low latency.

    3. Move the multicast application to AWS and enable enhanced networking. Configure
      the other applications to send their multicast feed to the application over AWS Direct
      Connect.

    4. Use the VPC routing table to route 224.0.0.0/8 traffic to the instance elastic network
      interface. Enable enhanced networking and jumbo frames for low latency and high
      throughput.

  24. What is bandwidth?

    1. Bandwidth is the number of bits that an instance can store in memory over a network.

    2. Bandwidth is the amount of data transferred from one point in the network to another
      point.

    3. Bandwidth is a measurement of the largest capacity of handling network traffic in any
      given path in a network.

    4. Bandwidth is the maximum data transfer rate at any point in the network.

  25. Why does User Datagram Protocol (UDP) react to performance characteristics differently
    than Transmission Control Protocol (TCP)?

    1. UDP requires more packet overhead than TCP.

    2. UDP supports less resilient applications.

    3. UDP is not a stateful protocol, so it reacts differently to latency and jitter.

    4. UDP lacks traffic congestion awareness.

Chapter 10 Automation


Review Questions

  1. In an AWS CloudFormation template, you attempt to create a Virtual Private Cloud (VPC)
    with a Classless Inter-Domain Routing (CIDR) range of 10.0.0.0/16 and a subnet within
    the VPC with a CIDR range of 10.1.0.0/24. What happens when you initiate a
    CreateStack operation with this template?

    1. AWS CloudFormation detects the conflict and returns an error immediately.

    2. AWS CloudFormation attempts to create the subnet. When this fails, it skips this step
      and creates the remaining resources.

    3. AWS CloudFormation attempts to create the subnet. When this fails, it rolls back all
      other resources.

    4. AWS CloudFormation attempts to create the subnet. When this fails, it calls a custom
      resource handler to handle the error.
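The CIDR conflict in this question can be checked locally before ever calling CreateStack. A minimal sketch using Python's standard ipaddress module, with the CIDR values taken from the question:

```python
import ipaddress

# CIDR ranges from the question.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.1.0.0/24")

# A subnet's CIDR must fall wholly inside its VPC's CIDR block.
# 10.1.0.0/24 does not, so creating the subnet fails at deploy time.
print(subnet.subnet_of(vpc))
```

Note that AWS CloudFormation performs no such static check of its own; the mismatch only surfaces when the subnet creation call fails during stack creation.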

  2. You have created a large AWS CloudFormation template so that users in your company can
    create a Virtual Private Cloud (VPC) with a Virtual Private Network (VPN) connection
    back to the company’s on-premises network. This template sometimes fails, with an error
    message about routes not being able to use the Virtual Private Gateway (VGW) because it is
    not attached to the VPC. What is the best way to solve this issue?

    1. Add a DependsOn attribute to the route resource and make it depend on the gateway
      attachment resource.

    2. Reorder the resources in the template so that the route resource comes after the VGW.

    3. Use a custom resource to create the route. In the code for the custom resource, have the
      code sleep for two minutes to allow the VGW time to attach to the VPC.

    4. Add a DependsOn attribute to the gateway attachment resource and make it depend on
      the route resource.
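For reference, a minimal sketch of the DependsOn approach in choice 1; the logical resource names and the destination CIDR are hypothetical:

```yaml
Resources:
  VPNGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      VpnGatewayId: !Ref MyVGW

  RouteToOnPremises:
    Type: AWS::EC2::Route
    # Without DependsOn, CloudFormation sees no reference between the route
    # and the attachment, so it may create the route before the VGW is
    # attached and fail intermittently.
    DependsOn: VPNGatewayAttachment
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 10.16.0.0/12
      GatewayId: !Ref MyVGW
```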

  3. When an AWS CloudFormation stack is deleted, what happens to the resources it created?

    1. They are deleted unless their aws:cloudformation:stack-id tag has been removed.

    2. They are retained unless they have a DeletionPolicy attribute set to Delete.

    3. They are deleted unless AWS CloudFormation detects whether they are still in use.

    4. They are deleted unless they have a DeletionPolicy attribute set to Retain.
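For reference, this is how a DeletionPolicy attribute looks in a template; the logical name and CIDR are hypothetical:

```yaml
Resources:
  CriticalSubnet:
    Type: AWS::EC2::Subnet
    # Retained when the stack is deleted; without a DeletionPolicy,
    # the default behavior for most resource types is Delete.
    DeletionPolicy: Retain
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.0.0/24
```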

  4. You are building an AWS CloudFormation template that will be deployed using a continuous delivery model. Which of the following sources can AWS CodePipeline monitor directly? (Choose two.)

    1. AWS CodeCommit

    2. A Git repository on an Amazon Elastic Compute Cloud (Amazon EC2) instance

    3. An on-premises GitHub Enterprise repository

    4. A Git repository in Amazon Elastic File System (Amazon EFS)

    5. Amazon Simple Storage Service (Amazon S3)



  5. What tool or service is needed to aggregate log files from multiple routing appliances running on Amazon Elastic Compute Cloud (Amazon EC2) instances?

    1. AWS Lambda

    2. Amazon Inspector agent

    3. Amazon CloudWatch Logs agent

    4. AWS Shield

  6. You are creating a pipeline in AWS CodePipeline that will deploy to an AWS CloudFormation test stack. If the deployment is successful, then AWS CodePipeline will deploy a production stack. The Virtual Private Cloud (VPC) Classless Inter-Domain Routing (CIDR) ranges used by the two stacks are different. What is the best way to proceed?

    1. Create two templates, test.yml and prod.yml, containing different CIDR ranges.

    2. Use a custom resource for creating the VPC that configures the VPCs appropriately.

    3. Use an AWS CloudFormation intrinsic function that detects which stack it is deploying
      to and sets the value accordingly.

    4. Create a single template with parameters. Create two parameter files, test.json and prod.json, containing different CIDR ranges.
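One possible shape for the test.json mentioned in choice 4, assuming the template declares a parameter named VpcCidr (hypothetical); this is the template configuration file format that an AWS CodePipeline CloudFormation deploy action accepts:

```json
{
  "Parameters": {
    "VpcCidr": "10.0.0.0/16"
  }
}
```

A prod.json would have the same shape with a different CIDR value.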

  7. Your organization requires human review of changes to a production AWS CloudFormation
    stack. A recent change to a Virtual Private Cloud (VPC) caused an outage when the changes
    unexpectedly deleted a subnet. What is the best way to prevent a similar occurrence in the
    future?

    1. Use the AWS CloudFormation ValidateTemplate Application Programming Interface
      (API) to verify the correctness of the template.

    2. Add an approval action to AWS CloudFormation that displays the pending changes
      and waits for approval.

    3. Create a change set in AWS CloudFormation for review. If the changes are approved,
      then execute the change set.

    4. Create a change set in AWS CloudFormation for review. If the changes are approved,
      then deploy the new template.

  8. You are starting a new networking deployment that will leverage the infrastructure as code
    model. What is the best way to track and visualize changes to the source code?

    1. Create a Git repository using GitHub.

    2. Set up an Amazon Simple Storage Service (Amazon S3) bucket with versioning enabled
      as a repository.

    3. Record changes using AWS CloudFormation change sets.

    4. Use AWS CodePipeline stages to track code state.


  9. You have an AWS CloudFormation stack that contains a Virtual Private Cloud (VPC) with
    a Classless Inter-Domain Routing (CIDR) range of 10.0.0.0/16. You change the template to
    add two subnets to the VPC, SubnetA and SubnetB, both with CIDR ranges of 10.0.0.0/24. What happens when you update the stack?

    1. AWS CloudFormation detects the error and does not perform any actions.

    2. AWS CloudFormation creates SubnetA and then attempts to create SubnetB; when this
      fails, it stops.

    3. AWS CloudFormation creates SubnetA and SubnetB in an indeterminate order; when
      one fails, it stops.

    4. AWS CloudFormation creates SubnetA and SubnetB in an indeterminate order; when
      one fails, it rolls back both subnets.

  10. An AWS CloudFormation stack contains a subnet that is critical to your infrastructure
    and should never be deleted, even if the stack is updated with a template that requires this.
    What is the best way to protect the subnet in this situation?

    1. Add a stack policy that denies the Update:Delete and Update:Replace actions on
      this resource.

    2. Use an AWS Identity and Access Management (IAM) service role that prohibits calls to ec2:DeleteSubnet.

    3. Add a DeletionPolicy property to the subnet resource with a value of Retain.

    4. Delete the aws:cloudformation tags attached to the subnet.
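A sketch of the stack policy from choice 1, assuming the subnet's logical ID is CriticalSubnet (hypothetical):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": ["Update:Delete", "Update:Replace"],
      "Principal": "*",
      "Resource": "LogicalResourceId/CriticalSubnet"
    }
  ]
}
```

Note the contrast with choice 3: a stack policy blocks destructive stack updates, while a DeletionPolicy of Retain only governs what happens when the resource itself is removed or the stack is deleted.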

Chapter 11 Service Requirements


Review Questions

  1. Which AWS Cloud services provide end-user connectivity to applications running within a Virtual Private Cloud (VPC)? (Choose two.)

    1. Remote Desktop Protocol

    2. PCoIP

    3. Amazon AppStream 2.0

    4. Amazon WorkSpaces

  2. How many network adapters are attached to a WorkSpace instance?

    1. 1

    2. 2

    3. 3

    4. 4

  3. How can AWS Lambda connect to the Internet when running in a Virtual Private Cloud
    (VPC)? (Choose two.)

    1. Internet gateway

    2. NAT Instance

    3. NAT gateway

    4. Public IP

  4. Amazon EMR requires which of the following? (Choose three.)

    1. DNS hostnames enabled on a VPC

    2. Private IP addresses

    3. Internet connectivity

    4. Amazon S3 connectivity

  5. What AWS Cloud service allows for serverless code execution?

    1. Amazon EC2

    2. Amazon RDS

    3. Amazon EMR

    4. AWS Lambda

  6. How can users reach the Internet through Amazon WorkSpaces? (Choose two.)

    1. No action is required; this is enabled by default.

    2. Through a public IP address assigned to each instance with an Internet gateway
      attached to the VPC

    3. Through a NAT gateway

    4. Specify Internet connectivity in the WorkSpace configuration.



  7. Which service provides managed database instances?

    1. Amazon ECS

    2. Amazon RDS

    3. AWS Lambda

    4. Amazon SQS

  8. What is required for Amazon RDS high availability?

    1. Multi-AZ deployment with two subnets

    2. Amazon RDS snapshots

    3. Multi-AZ deployment with one subnet

    4. High availability is provided by default

  9. Which service will automatically provision and scale an application infrastructure with a
    user only needing to provide application code?

    1. Amazon ECS

    2. Elastic Load Balancing

    3. AWS Elastic Beanstalk

    4. AWS CloudFormation

  10. A developer wants to create a simple application to run on AWS using AWS Elastic
    Beanstalk. What must the network administrator set up?

    1. Load balancers

    2. Amazon EC2

    3. Security groups

    4. None of the above

  11. An application developer wants to replicate data asynchronously between an on-premises database and Amazon RDS across different database engines. What steps will allow this? (Choose two.)

    1. Create an AWS DMS instance.

    2. Allow access to the on-premises database server from within a VPC.

    3. Open all database servers up for Internet connectivity.

    4. Create a security group to allow connectivity between the Amazon RDS and on-premises databases.

  12. Your team is going to provision a 10-node Amazon Redshift cluster. How many IP
    addresses should be available in the subnet?

    1. 9

    2. 10

    3. 11

    4. 12



  13. Your team has created a Multi-AZ Amazon RDS instance. The front-end application tier
    connects to the database through a custom DNS A record. After the primary database fails,
    the front-end application server can no longer reach the database. What change needs to be
    made to ensure availability in the event of a failover?

    1. The A name needs to be updated.

    2. The primary Amazon RDS instance needs to be restored.

    3. The application needs to use the IP address of the secondary Amazon RDS instance.

    4. The application needs to use the Amazon RDS hostname to connect to the database.

Chapter 12 Hybrid Architectures


Review Questions

  1. You have an on-premises application that requires access to Amazon Simple Storage Service (Amazon S3) storage. How do you enable this connectivity while designing for high-bandwidth access with low jitter, high availability, and high scalability?

    1. Set up an AWS Direct Connect public Virtual Interface (VIF).

    2. Set up public Internet access to Amazon Simple Storage Service (Amazon S3).

    3. Set up an AWS Direct Connect private VIF.

    4. Set up an IP Security (IPsec) Virtual Private Network (VPN) to a Virtual Private
      Gateway (VGW).

  2. You have two Virtual Private Clouds (VPCs) set up in AWS for different projects. AWS
    Direct Connect has been set up for hybrid IT connectivity. Your security team requires that
    all traffic going to these VPCs be inspected using a Layer 7 Intrusion Prevention System
    (IPS)/Intrusion Detection System (IDS). How will you architect this while considering cost
    optimization, scalability, and high availability?

    1. Set up a transit VPC architecture with a pair of Amazon Elastic Compute Cloud
      (Amazon EC2) instances acting as a transit point for all traffic. These transit instances
      will host Layer 7 IPS/IDS software.

    2. Use host-based IPS/IDS inspection on the end servers.

    3. Deploy an inline IPS/IDS instance in each VPC and add an entry in the route table to
      point to the Amazon EC2 instance as the default gateway.

    4. Use AWS WAF as an inline gateway for all hybrid traffic.

  3. You have set up a transit Virtual Private Cloud (VPC) architecture and want to connect the
    spoke VPCs to the hub VPC. What termination endpoint should you choose on the spokes,
    considering the least management overhead?

    1. Virtual Private Gateway (VGW)

    2. Amazon Elastic Compute Cloud (Amazon EC2) instance

    3. VPC peering gateway

    4. Internet gateway

  4. You are tasked with setting up IP Security (IPsec) Virtual Private Network (VPN) connectivity between your on-premises data center and AWS. You have an application on-premises that will exchange sensitive control information with an Amazon Elastic Compute Cloud (Amazon EC2) instance in the Virtual Private Cloud (VPC). This traffic should take priority in the VPN tunnel over all other traffic. How will you design this solution, considering the least management overhead?

    1. Terminate a VPN connection on an Amazon EC2 instance loaded with software supporting Quality of Service (QoS) and use Differentiated Services Code Point (DSCP) markings to give priority to the application traffic as it is sent and received over the VPN tunnel.

    2. Terminate VPN on a Virtual Private Gateway (VGW) and use DSCP markings to give
      priority to the application traffic as it is sent and received over the VPN tunnel.



    3. Terminate a VPN connection on two Amazon EC2 instances. Use one instance for sensitive control information and the other instance for the rest of the traffic.

    4. Move the sensitive application to a separate VPC. Create separate VPN tunnels to
      these VPCs.

  5. Which of the following endpoints can be accessed over AWS Direct Connect?

    1. Network Address Translation (NAT) gateway

    2. Internet gateway

    3. Gateway Virtual Private Cloud (VPC) endpoints

    4. Interface VPC endpoints

  6. You have to set up an AWS Storage Gateway appliance on-premises to archive all of your
    data to Amazon Simple Storage Service (Amazon S3) using the file gateway mode. You have
    AWS Direct Connect connectivity between your data center and AWS. You have set up a
    private Virtual Interface (VIF) to a Virtual Private Cloud (VPC), and you want to use that
    for sending all traffic to AWS. How will you architect this?

    1. Set up a Squid HTTP proxy on an Amazon Elastic Compute Cloud (Amazon EC2)
      instance in the VPC. Configure the storage gateway to use this proxy.

    2. Set up a storage gateway appliance in the VPC and use that as a gateway.

    3. Create an IP Security (IPsec) Virtual Private Network (VPN) tunnel between the storage gateway and the VPC over a private VIF.

    4. Configure the storage gateway to use a VPC private endpoint on the VPC.

  7. You have a hybrid IT application that requires access to Amazon DynamoDB. You have set up AWS Direct Connect between your data center and AWS. All data written to Amazon DynamoDB should be encrypted as it is written to the database. How will you enable connectivity from the on-premises application to Amazon DynamoDB?

    1. Set up a public Virtual Interface (VIF).

    2. Set up a private VIF.

    3. Set up IP Security (IPsec) Virtual Private Network (VPN) over public VIF.

    4. Set up IPsec VPN over private VIF.

  8. You have a transit Virtual Private Cloud (VPC) set up with the hub VPC in us-east-1 and the spoke VPCs spread across multiple AWS Regions. Servers in the VPCs in Mumbai and Singapore experience high latencies when connecting with each other. How do you re-architect your VPCs to maintain the transit VPC architecture and reduce the latencies in the overall architecture?

    1. Set up a local transit hub VPC in the Mumbai region. Connect the VPCs in Mumbai
      and Singapore to this hub. Set up an IP Security (IPsec) Virtual Private Network (VPN)
      over cross-region VPC peering between the two hubs.

    2. Set up a local transit hub in the Singapore region. Connect the VPCs in Mumbai and
      Singapore to this hub VPC. Set up a Generic Routing Encapsulation (GRE) VPN over
      cross-region VPC peering between the two hubs.

    3. Add transit Amazon Elastic Compute Cloud (Amazon EC2) instances in the us-east-1
      hub VPC dedicated to the traffic coming from the Mumbai and Singapore regions.

    4. Add a transit VPC hub in us-east-1. Connect the VPCs in Mumbai and Singapore to
      this new hub and then connect the two hubs using VPC peering.



  9. You have an application in a Virtual Private Cloud (VPC) that requires access to on-premises Active Directory servers for joining the company domain. How will you enable this setup, considering low latency for domain join requests?

    1. Set up a Virtual Private Network (VPN) terminating on a Virtual Private Gateway
      (VGW) attached to the VPC.

    2. Set up an AWS Direct Connect public Virtual Interface (VIF).

    3. Set up an AWS Direct Connect private VIF.

    4. Set up a VPN terminating on an Amazon Elastic Compute Cloud (Amazon EC2)
      instance in the VPC.

  10. Which of the following is a good use case for leveraging the transit Virtual Private Cloud
    (VPC) architecture?

    1. Allow on-premises resources access to any VPC globally in AWS.

    2. Allow on-premises resources access to Amazon Simple Storage Service (Amazon S3).

    3. Allow on-premises resources access to AWS resources while inspecting all traffic for
      compliance reasons.

    4. Allow on-premises resources access to other remote networks.

Chapter 13 Network Troubleshooting


Review Questions

  1. You place an Application Load Balancer in front of two stateful web servers. Users begin to report intermittent connectivity issues when accessing the website. Why is the site not responding?

    1. The website needs to have port 443 open.

    2. Sticky sessions must be enabled on the Application Load Balancer.

    3. The web servers need to have their security group set to allow all Transmission Control
      Protocol (TCP) traffic from 0.0.0.0/0.

    4. The network Access Control List (ACL) on the subnet needs to allow a stateful
      connection.

  2. You create a new instance, and you are able to connect over Secure Shell (SSH) to its private IP address from your corporate network. The instance does not have Internet access, however. Your internal policies forbid direct access to the Internet. What is required to enable access to the Internet?

    1. Assign a public IP address to the instance.

    2. Ensure that port 80 and port 443 are not set to DENY in the instance security group.

    3. Deploy a Network Address Translation (NAT) gateway in the private subnet.

    4. Ensure that there is a default route in the subnet route table that goes to your
      on-premises network.

  3. You create a Network Address Translation (NAT) gateway in a private subnet. Your
    instances cannot communicate with the Internet. What action must you take?

    1. Add a default route out to the Internet gateway.

    2. Ensure that outbound traffic is allowed on port 80 and port 443.

    3. Delete the NAT gateway and deploy it in a public subnet.

    4. Place the instances in a public subnet.

  4. What is not required for Internet connectivity from a public subnet?

    1. Public IP

    2. Network Address Translation (NAT) gateway

    3. Outbound rule in a security group

    4. Inbound rule in the network Access Control List (ACL)

    5. Outbound rule in the network ACL

    6. An Internet gateway

    7. A default route to an Internet gateway



  5. You are trying to add two new Virtual Private Cloud (VPC) peering connections to a VPC with 24 existing peering connections. The first connection works fine, but the second connection returns an error message. What should you do?

    1. Submit a request to AWS Support to have your VPC peer limit increased.

    2. Select another AWS Region to set up the VPC peering connection.

    3. Retry the request again; the error may go away.

    4. Deploy a Virtual Private Network (VPN) instance to connect the VPC.

  6. You created a new endpoint for your Virtual Private Cloud (VPC) that does not have Internet connectivity. Your instance cannot connect to Amazon Simple Storage Service (Amazon S3). What could be the problem?

    1. There is no route in your route table to the Amazon S3 VPC endpoint.

    2. The Amazon S3 bucket is in another region.

    3. Your bucket access list is not properly configured.

    4. The VPC endpoint does not have the proper AWS Identity and Access Management
      (IAM) policy attached to it.

    5. All of the above

  7. You recently set up Amazon Route 53 for a private hosted zone for a highly available application hosted on AWS. After adding a few A records, you notice that the instance hostnames are not resolving within the Virtual Private Cloud (VPC). What actions should be taken? (Choose two.)

    1. Allow port 53 on the instance security group.

    2. Create a Dynamic Host Configuration Protocol (DHCP) option set.

    3. Set enableDnsHostnames to true on the VPC.

    4. Set enableDnsSupport to true on the VPC.
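The two VPC attributes in choices 3 and 4 map directly to CloudFormation properties on the VPC resource; a minimal sketch (logical name and CIDR are hypothetical):

```yaml
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      # Both attributes must be true for private hosted zone resolution
      # and DNS hostname assignment to work inside the VPC.
      EnableDnsSupport: true
      EnableDnsHostnames: true
```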

  8. You discover that the default Virtual Private Cloud (VPC) has been deleted from region us-east-1 by a coworker in the morning. You will be deploying a lot of new services during the afternoon. What should you do?

    1. It’s not important, so no action is required.

    2. Designate a VPC that you create as the default VPC.

    3. Create an AWS Support ticket to have your VPC re-created.

    4. Perform an Application Programming Interface (API) call or go through the AWS
      Management Console to create a new default VPC.



  9. You are responsible for your company's AWS resources. You notice a significant amount of traffic from an IP address range in a foreign country where your company does not have customers. Further investigation of the traffic indicates that the source of the traffic is scanning for open ports on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Which one of the following resources can prevent the IP address from reaching the instances?

    1. Security group

    2. Network Address Translation (NAT) gateway

    3. Network Access Control List (ACL)

    4. A Virtual Private Cloud (VPC) endpoint

  10. Which of the following tools can be used to record the source and destination IP addresses
    of traffic? (Choose two.)

    1. Flow logs

    2. Packet capture on an instance

    3. AWS CloudTrail

    4. AWS Identity and Access Management (IAM)

Chapter 14 Billing


Review Questions

  1. You have two Amazon Elastic Compute Cloud (Amazon EC2) instances in two different
    Virtual Private Clouds (VPCs) that have a peering connection. Both VPCs are in the same
    Availability Zone. What charge will you see on your bill for data transfer between those
    two instances?

    1. $0.00 per GB in each direction

    2. $0.01 per GB in each direction

    3. $0.02 per GB in each direction

    4. $0.04 per GB in each direction

  2. Which of the following statements regarding data transfer into Amazon Simple Storage
    Service (Amazon S3) is not true?

    1. Data transfer from a non-AWS public IP to Amazon S3 is not charged.

    2. Data transfer from Amazon Elastic Compute Cloud (Amazon EC2) in us-west-2 to an
      Amazon S3 bucket in eu-west-1 is not charged.

    3. Data transfer from Amazon EC2 to Amazon S3 in the same region is not charged.

    4. Data transfer from Amazon S3 to an Amazon CloudFront edge location is not charged.

  3. You elect to use an AWS Direct Connect public Virtual Interface (VIF) to carry an IP Security (IPsec) Virtual Private Network (VPN) from your Virtual Private Cloud (VPC) Virtual Private Gateway (VGW) to your customer gateway. What rate is charged for all of the data transfer over the VPN?

    1. $0.00 per GB

    2. $0.020 per GB

    3. $0.05 per GB

    4. $0.09 per GB

  4. Which of the following types of data transfer is not charged?

    1. From Amazon Elastic Compute Cloud (Amazon EC2) in eu-west-1 to Amazon Simple
      Storage Service (Amazon S3) in us-east-1

    2. From your on-premises data center to Amazon S3 in us-east-1

    3. From Amazon EC2 in eu-west-1 to your on-premises data center

    4. From Amazon S3 in us-east-1 to Amazon EC2 in eu-west-1

  5. You want to receive an email in advance if it is likely that your monthly charge will exceed $200. Which is the most appropriate mechanism to generate this notification?

    1. Create a billing alarm in Amazon CloudWatch.

    2. Create a budget.

    3. Enable Cost and Usage reporting.

    4. Access your billing console.
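For reference, a forecast-based notification can be declared as a budget resource in CloudFormation. This is a hedged sketch: the logical name and email address are placeholders, and the property names should be checked against the current AWS::Budgets::Budget reference.

```yaml
Resources:
  MonthlyCostBudget:
    Type: AWS::Budgets::Budget
    Properties:
      Budget:
        BudgetType: COST
        TimeUnit: MONTHLY
        BudgetLimit:
          Amount: 200
          Unit: USD
      NotificationsWithSubscribers:
        # FORECASTED fires when the projected month-end spend crosses the
        # threshold, i.e., "in advance" of the actual overrun.
        - Notification:
            NotificationType: FORECASTED
            ComparisonOperator: GREATER_THAN
            Threshold: 100
            ThresholdType: PERCENTAGE
          Subscribers:
            - SubscriptionType: EMAIL
              Address: billing-alerts@example.com
```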



  6. After creating an AWS Direct Connect connection, what is the earliest point in time that
    you start receiving port-hour charges?

    1. 90 days from creation

    2. When the connection becomes available for the first time

    3. Once you have transferred 100 MB of data

    4. When a Virtual Interface (VIF) is created

  7. Which of the following is not used for billing of the Network Address Translation (NAT)
    gateway?

    1. NAT gateway hourly charge

    2. NAT gateway data processing charge

    3. Active session charge

    4. Data transfer charge

  8. Which of the following is the charge for data transfer out from Amazon Simple Storage
    Service (Amazon S3) to Amazon CloudFront?

    1. $0.000 per GB

    2. $0.010 per GB

    3. $0.020 per GB

    4. Varies by edge location

  9. When using a public Virtual Interface (VIF) on AWS Direct Connect, you access an
    Amazon Simple Storage Service (Amazon S3) bucket owned by someone who is not part of
    your organization. Who pays for data transfer from that bucket?

    1. The owner of the AWS Direct Connect connection

    2. The Amazon S3 bucket owner

    3. The owner of the public VIF

    4. No one; it is not charged.

  10. You make a connection from an Amazon Elastic Compute Cloud (Amazon EC2) instance
    that you own to the public IP address for another Amazon EC2 instance in your account.
    Both instances are in the same Availability Zone. How much does this cost in us-east-1?

    1. Nothing; data transfer is not charged within the same Availability Zone

    2. $0.010 per GB in each direction

    3. $0.090 per GB in each direction

    4. Nothing in one direction; $0.090 per GB in the other direction

464 Chapter 15 Risk and Compliance


Review Questions

  1. Amazon Virtual Private Cloud (Amazon VPC) Flow Logs report accept and reject data
    based on which VPC features? (Choose two.)

    1. Security groups

    2. Elastic network interfaces

    3. Network Access Control Lists (ACLs)

    4. Virtual routers

    5. Amazon Simple Storage Service (Amazon S3)

  2. What is the minimum runtime for Amazon Inspector when initiated from the AWS
    Console?

    1. 1 minute

    2. 5 minutes

    3. 10 minutes

    4. 15 minutes

  3. Compliance documents are available from which of the following?

    1. AWS Artifact on the AWS Management Console

    2. Compliance portal on the AWS website

    3. Services in Scope page on the AWS website

    4. AWS Trusted Advisor on the AWS Management Console

  4. AWS Identity and Access Management (IAM) uses which access model?

    1. Principal, Action, Resource, Condition (PARC)

    2. Effect, Action, Resource, Condition (EARC)

    3. Principal, Effect, Resource, Condition (PERC)

    4. Resource, Effect, Action, Condition, Time (REACT)

  5. Which hash algorithm is used for AWS CloudTrail record digests?

    1. SHA-256

    2. MD5

    3. RIPEMD-160

    4. SHA-3

  6. Penetration testing requests may be submitted to AWS by which means?

    1. Postal mail

    2. Email

    3. Social media

    4. AWS Support



  7. What is the maximum duration of an AWS penetration testing authorization?

    1. 24 hours

    2. 48 hours

    3. 30 days

    4. 90 days

  8. Who is responsible for network traffic protection in Amazon Virtual Private Cloud
    (Amazon VPC)?

    1. AWS

    2. The customer

    3. It is a shared responsibility.

    4. The network provider

  9. What authorization feature can restrict the actions of an account’s root user?

    1. AWS Identity and Access Management (IAM) policy

    2. Bucket policy

    3. Service Control Policy (SCP)

    4. Lifecycle policy

  10. Which AWS Cloud service provides information regarding common vulnerabilities and
    exposures?

    1. AWS CloudTrail

    2. AWS Config

    3. AWS Artifact

    4. Amazon Inspector



Review Questions

  1. Which Amazon Route 53 routing policy would be the most appropriate for gradually
    migrating an application to AWS?

    1. Weighted

    2. Latency-based

    3. Failover

    4. Geolocation

  2. When connecting an on-premises network to AWS, which option reuses existing network
    equipment and Internet connections?

    1. VPN connection

    2. AWS Direct Connect

    3. VPC Private Endpoints

    4. Network Load Balancer

  3. Which Amazon Route 53 routing policy would be the most appropriate for directing users
    to application resources that offer payment in their local currency?

    1. Weighted

    2. Latency-based

    3. Failover

    4. Geolocation

  4. Your current web application’s network security architecture includes an Application Load
    Balancer, locked down Security Groups, and restrictive VPC route tables. You have been
    asked to implement additional controls for temporarily blocking hundreds of noncontiguous,
    malicious IP addresses. Which AWS service or features should you add to this architecture?

    1. AWS WAF

    2. Network ACLs

    3. AWS Shield

    4. AWS PrivateLink

  5. A previous network administrator implemented a transit VPC architecture using Amazon
    EC2 instances with 10 Gbps networking to facilitate communication between multiple AWS
    VPCs in various regions and on-premises resources. Over time, the transit VPC Amazon
    EC2 instance network bandwidth has become saturated with on-premises traffic, causing
    application requests to fail. What design recommendations can you make to reduce
    application failures?

    1. Implement AWS Direct Connect and migrate to an AWS Direct Connect gateway.

    2. Enable SR-IOV on your transit VPC instance ENIs.

    3. Offload network traffic to AWS PrivateLink to facilitate connectivity with on-premises resources.

    4. Upgrade from 10 Gbps Amazon EC2 instances to 25 Gbps instances with ENA.

      482 Chapter 16 Scenarios and Reference Architectures


  6. A previous network administrator implemented a transit VPC architecture to facilitate
    communication between multiple AWS networks and on-premises resources. Over time,
    the transit VPC Amazon EC2 instance network bandwidth has become saturated with
    cross-region traffic. What highly available design change should you recommend for this
    network?

    1. Migrate cross-region traffic to a point-to-point VPN connection between an Amazon
      EC2 instance in each VPC.

    2. Disable route propagation on your VPC route tables to disable cross-region traffic.

    3. Leverage VPC Peering connections between VPCs across regions.

    4. Implement network ACLs to rate limit cross-region traffic.

  7. You support an application that is hosted in ap-northeast-1 and eu-central-1. Users from
    around the world sometimes complain about long page-load times. Which Amazon Route
    53 routing policy would provide the best user experience?

    1. Weighted

    2. Latency-based

    3. Failover

    4. Geolocation

  8. When connecting an on-premises network to AWS APIs, which option provides the least
    amount of network jitter and latency?

    1. VPN connection

    2. AWS Direct Connect private VIF

    3. AWS Direct Connect public VIF

    4. VPC Endpoints

  9. Which combination of Amazon Route 53 policies provide location-specific services with
    redundant, backup connections? (Choose two.)

    1. Weighted

    2. Latency-based

    3. Failover

    4. Geolocation

    5. Simple

  10. What is a scalable way to provide Amazon EC2 instances in a private subnet with IPv4
    egress access to the Internet with no need for network administration?

    1. Create a transit VPC with network address translation for all your VPCs.

    2. Create an egress-only Internet Gateway.

    3. Create multiple Amazon EC2 NAT instances in each Availability Zone.

    4. Create NAT Gateways.



  11. Your users have started to complain about poor application performance. You determine
    that your on-premises VPN connection is saturated with authentication and authorization
    traffic to the on-premises Microsoft Active Directory (AD) environment. Which option will
    reduce on-premises network traffic?

    1. Replicate Microsoft AD to Amazon EC2 instances in a shared service network and
      migrate to VPC Peering connections.

    2. Migrate from a VPN connection to multiple AWS Direct Connect connections.

    3. Create a trust relationship between AWS Directory Service and your on-premises
      Microsoft AD and migrate to VPC Peering connections.

    4. Offload network traffic to AWS PrivateLink to facilitate connectivity with Microsoft AD
      on-premises.


Appendix: Answers to Review Questions




Chapter 1: Introduction to Advanced
Networking

  1. B. AWS Direct Connect provides private connectivity between customer environments and
    AWS.

  2. C. Amazon CloudFront is a Content Distribution Network (CDN) that operates from AWS
    edge locations.

  3. D. AWS Regions contain two or more Availability Zones. Availability Zones contain one
    or more data centers. Edge locations are located throughout the Internet.

  4. D. AWS Regions contain two or more Availability Zones. Availability Zones contain one
    or more data centers. A region contains a cluster of two or more data centers.

  5. A. AWS Regions contain two or more Availability Zones. Availability Zones contain one
    or more data centers. If you distribute your instances across multiple Availability Zones and
    one instance fails, you can design your application so that an instance in another zone can
    handle requests.

  6. C. Amazon Virtual Private Cloud (Amazon VPC) allows customers to create a logically
    isolated network within an AWS Region.

  7. A. AWS Shield provides DDoS mitigation. AWS Shield Standard is available to all
    customers at no additional charge.

  8. A. The AWS global infrastructure is operated by a single company, Amazon.

  9. B. Amazon VPC is an isolated, logical portion of an AWS Region that you define.

  10. B. The mapping service maintains topology information about every resource in a VPC.

  11. D. When you create an Amazon VPC, you choose the IPv4 address range to use. You may
    optionally enable IPv6 on your Amazon VPC.

  12. B. Amazon Route 53 is a managed Domain Name System (DNS) service. You may register
    domains using Amazon Route 53.

  13. A. AWS Direct Connect lets you create a dedicated network connection between your
    location and AWS. AWS Direct Connect provides a more consistent network experience
    than the Internet.

  14. C. AWS WAF allows you to create web Access Control Lists (ACLs) to protect your
    Amazon CloudFront and Elastic Load Balancing (for example, Application Load Balancer)
    environments.

  15. B. Elastic Load Balancing provides application traffic distribution among healthy Amazon
    EC2 instances in your Amazon Virtual Private Cloud (Amazon VPC).



    Chapter 2: Amazon Virtual Private
    Cloud (Amazon VPC) and Networking
    Fundamentals

    1. C. You need two public subnets (one for each Availability Zone) and two private subnets
      (one for each Availability Zone). Therefore, you need four subnets.

    2. B. The NAT gateway uses an IPv4 Elastic IP address when it performs many-to-one address
      translation. In order for the traffic to route to the Internet, the NAT gateway must be placed
      in a public subnet with a route to an Internet gateway.

    3. D. Placement groups are designed to provide the highest performance network between
      Amazon Elastic Compute Cloud (Amazon EC2) instances.

    4. A. When you create an Amazon VPC, a route table is created by default. You must
      manually create subnets and an Internet gateway.

    5. A. You may only have one Internet gateway for each Amazon VPC.

    6. B. Security groups are stateful, whereas network ACLs are stateless.
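The stateful/stateless distinction can be made concrete with a small simulation. This is an illustrative model only, not actual AWS security group or network ACL behavior, and all class and port names are hypothetical: a stateful filter remembers outbound connections and automatically admits the return traffic, while a stateless filter evaluates every packet against its rules in isolation.

```python
class StatefulFilter:
    """Like a security group: tracks connections, allows return traffic."""
    def __init__(self, allowed_outbound_ports):
        self.allowed_outbound_ports = allowed_outbound_ports
        self.connections = set()  # remembered (src, dst, port) tuples

    def outbound(self, src, dst, port):
        if port in self.allowed_outbound_ports:
            self.connections.add((src, dst, port))
            return True
        return False

    def inbound(self, src, dst, port):
        # Return traffic for a tracked connection is allowed automatically,
        # even with no explicit inbound rule.
        return (dst, src, port) in self.connections


class StatelessFilter:
    """Like a network ACL: each packet is checked in isolation."""
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = inbound_ports
        self.outbound_ports = outbound_ports

    def outbound(self, src, dst, port):
        return port in self.outbound_ports

    def inbound(self, src, dst, port):
        return port in self.inbound_ports


sg = StatefulFilter(allowed_outbound_ports={443})
sg.outbound("10.0.0.5", "93.184.216.34", 443)        # request leaves
print(sg.inbound("93.184.216.34", "10.0.0.5", 443))  # True: reply is tracked

nacl = StatelessFilter(inbound_ports=set(), outbound_ports={443})
nacl.outbound("10.0.0.5", "93.184.216.34", 443)        # request leaves
print(nacl.inbound("93.184.216.34", "10.0.0.5", 443))  # False: no inbound rule
```

This is why a network ACL needs explicit inbound rules for ephemeral return ports, while a security group does not.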

    7. D. A customer gateway is the customer side of a VPN connection, and an Internet gateway
      connects a network to the Internet. A Virtual Private Gateway (VGW) is the Amazon side
      of a VPN connection.

    8. D. Attaching an elastic network interface associated with a different subnet to an instance
      can make the instance dual-homed.

    9. C. Each Amazon VPN connection provides two IPsec tunnel endpoints.


Chapter 3: Advanced Amazon Virtual
Private Cloud (Amazon VPC)

  1. D. VPC endpoints are private access to otherwise public services. This access method does
    not decrease performance or increase availability. In addition, the services are still available
    through public APIs unless service-specific configurations, such as Amazon Simple Storage
    Service (Amazon S3) bucket policies, have been configured to limit access to VPC endpoints.

  2. D. This is expected behavior when you limit access to a VPC endpoint. It is possible that
    a proxy also blocks access. The objects are still there. The VPC endpoint policy does
    not have a condition that applies specifically to the console, and endpoint policies do not
    restrict which resources can access buckets. In order to enable access to Amazon S3 buckets
    through the AWS Management Console, you must allow public access.



  3. C, D. AWS PrivateLink applies source Network Address Translation (NAT), so the source
    IP will not be natively available. VPC peering allows bidirectional communication, but it
    does not allow better performance or scalability. AWS PrivateLink is unidirectional only.
    AWS PrivateLink does support more spoke VPCs than VPC peering. AWS PrivateLink will
    not increase the performance; that only comes from adding more resources.

  4. A, B. AWS PrivateLink only supports TCP traffic. It is possible to use the IPv4 address of
    an AWS PrivateLink endpoint as opposed to the DNS name. There is no inherent
    authentication for VPC endpoints, other than what is defined at an application level. You
    cannot create a VPN through AWS PrivateLink because it does not support IPsec.

  5. A, D. DNS must be enabled for Amazon S3 endpoints to function. Amazon S3 endpoints
    do not require IP addresses. Endpoints also are not affected by private or public subnets.
    Amazon S3 endpoints do require a route in the routing table.

  6. B, E. Inbound security groups do not define outbound policy. In addition, the NAT
    instance could have an iptables rule or similar firewall rule for 8080. It is possible for NAT
    instances to run out of ports, but it is nearly impossible for multiple instances to
    simultaneously run out of ports for 8080 because they support 65,000 ports. Inbound
    network ACL rules block inbound ports, not outbound ports in this case. It is also possible
    for the server to be blocking the addresses or method you are using to access port 8080.

  7. C. Transitive routing prevents instances from communicating across transitively peered VPCs.
    If instances are configured to use a proxy, then the destination IP on each hop is an instance in
    the peered VPC. You cannot define a route to a network interface in a peered VPC.

  8. C, D. AWS PrivateLink does not use prefix lists. Instances do not need additional
    interfaces to use VPC endpoints. Instances do need to support DNS and to use the correct
    entry. Security groups can block access to private services. A route table with AWS
    PrivateLink will not have IP addresses.

  9. A, D. You cannot create new CIDR ranges if you are at the maximum allowed routes.
    Subnets and VPCs do not affect new CIDR ranges. There are limitations on valid CIDR
    ranges based on the original CIDR range defined. Other VPCs do not create dependencies
    on adding a new range. The VPC is new and so would not be peered with any other VPCs.

  10. D. The routing, subnets, and new CIDR range are valid. New CIDR ranges must be more
    specific than existing routes, which is the case here. CIDR ranges do not need to be
    contiguous.
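The notion of one CIDR range being "more specific" than another comes up repeatedly in these answers. Python's standard ipaddress module can make it concrete: routing selects the matching route with the longest prefix. This is an illustrative sketch with a hypothetical route table, not real VPC route evaluation (which has additional rules, such as the local route always taking priority):

```python
import ipaddress

# Hypothetical route table: destination network -> target.
routes = {
    ipaddress.ip_network("10.0.0.0/16"): "local",
    ipaddress.ip_network("10.0.1.0/24"): "vpc-peering",
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",
}

def best_route(dest):
    """Longest-prefix match: the most specific route containing dest wins."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("10.0.1.7"))  # vpc-peering (the /24 beats the /16)
print(best_route("10.0.2.7"))  # local
print(best_route("8.8.8.8"))   # internet-gateway
```

Here 10.0.1.7 falls inside both 10.0.0.0/16 and 10.0.1.0/24, and the /24 wins because it is more specific.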

  11. C, D. AWS PrivateLink can scale to this use case, as well as provide central services.
    Another option is to access these services over the Internet, provided that authentication
    and encryption are strong. VPC peering does not work with thousands of VPCs. Security
    groups cannot be referenced without an associated peering connection. You cannot create a
    VPN between two VGWs because neither will initiate a connection.

  12. C. You cannot add different RFC1918 CIDR ranges to an existing VPC, and you also
    cannot use new CIDR ranges on existing subnets. In addition, NAT Gateways will not
    support custom NAT. The only option presented that works is peering to a new VPC.

  13. B. This is a test of transitive routing rules. The only connection that has an external source
    from the perspective of VPC routing and an external destination is the virus scan. Traffic
    within the VPN stays on the instance and can route. The API request is sourced from
    an instance in the peered VPC and the destination is an instance. While the web request
    appears to be an external source and destination, the packet is tunneled, so the VPC sees it
    as a new flow, where the source is the network interface of the VPN server.

  14. A, D. The Network Load Balancer and interface VPC endpoints are accessible over AWS Direct
    Connect. Gateway VPC endpoints require a proxy. The AWS metadata service isn’t a network
    interface, so it could work through a proxy but would return results specific to the proxy.

  15. C. The one large VPC approach and the replication approach do not meet the
    organizational requirements. Cross-account network interfaces will not scale, and you do
    not route code. This leaves AWS PrivateLink, which provides scalability and meets the
    requirements.

  16. A, C. Auto-assigned addresses are not eligible for recall. You can only recall Elastic IP addresses
    the account has owned. Tagging is not necessary. It is possible to recall Elastic IP addresses in
    some scenarios. The Elastic IP address is not related to an instance number because it won’t be
    automatically associated with an instance but rather returned to the account.


Chapter 4: Virtual Private Networks

  1. A, E. VGW is the managed VPN endpoint for your Amazon VPC. Alternatively, you can
    terminate VPN on an Amazon EC2 instance.

  2. B. Two tunnels are required: one to each of the Virtual Private Gateway’s (VGW) endpoints.

  3. B, C. When you create a dynamic tunnel, BGP is used. When you create a static tunnel,
    static routes are used.

  4. D. In an Amazon EC2-based VPN termination option, you are responsible for maintaining
    all infrastructure from the operating system level up. AWS is responsible for maintaining
    the underlying hardware and Hypervisor.

  5. A. The Source/Destination Check attribute controls whether source/destination checking
    is enabled on the instance. Disabling this attribute enables an instance to handle network
    traffic that isn't specifically destined for the instance. Because this Amazon EC2 instance
    will handle and route traffic to all Amazon EC2 instances in the VPC in this case, this
    check has to be disabled.

  6. B. Unlike site-to-site VPN, AWS currently doesn’t offer a managed gateway endpoint for
    this type of VPN setup. You will have to use an Amazon EC2 instance as a client-to-site
    VPN gateway.

  7. C. SSL or Transport Layer Security (TLS) works at the application layer and encrypts all
    TCP traffic. SSL is a more efficient algorithm than IPsec and is easier to deploy/use. By
    using SSL, you can also encrypt only the traffic for the application that requires it, whereas
    with IPsec all traffic is encrypted. Option D is incorrect as it covers encryption at rest while
    the question is about achieving encryption in motion.

  8. A. The IP addresses of the VGW endpoints are automatically generated. These IP addresses
    are used to terminate the VPN connections.



Chapter 5: AWS Direct Connect

  1. C. The VGW provides connectivity to your Amazon VPC. The Internet gateway provides
    access to the Internet. VPC endpoints are for specific AWS Cloud services. A peering
    connection is used to connect to other VPCs.

  2. A. AWS Direct Connect requires the use of BGP to exchange routing information.

  3. D. One is the minimum number of connections in a LAG.

  4. D. AWS Direct Connect supports public and private VIFs.

  5. A. Each AWS Direct Connect location has a minimum of two devices for resilience,
    meaning that a resilient connection can be established at a single location if desired.

  6. C. One hundred prefixes can be announced over a private VIF.

  7. A. A LAG behaves as a single Layer 2 connection. Each provisioned VIF spans the LAG
    but requires only a single BGP session.

  8. B. Local routes to the VPC are always the highest priority route. Amazon VPC does not
    allow you to have more specific routing than the VPC Classless Inter-Domain Routing
    (CIDR) range.

  9. B. A customer can define and allocate a VIF to another AWS account. This configuration is
    a hosted VIF.

  10. D. The only mechanism to stop billing on an AWS Direct Connect connection is to delete
    the connection itself. Even with all the VIFs deleted, you are still charged the port-hour fees
    for the connection.


Chapter 6: Domain Name System and
Load Balancing

  1. A, E. There are two types of hosted zones: private and public. A private hosted zone is a
    container that holds information about how you want to route traffic for a domain and its
    subdomains within one or more Amazon VPCs. A public hosted zone is a container that
    holds information about how you want to route traffic on the Internet for a domain.

  2. D. Amazon Route 53 can route queries to a variety of AWS resources. It is important
    to know what resources are not applicable, such as AWS CloudFormation and AWS
    OpsWorks.

  3. C. If you want to stop sending traffic to a resource, you can change the weight for that
    record to 0.
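The effect of a zero weight can be illustrated with a short simulation. This is a hypothetical model of weighted record selection, not the Amazon Route 53 implementation: each record is chosen in proportion to its weight, so a record with weight 0 receives no traffic.

```python
import random

# Hypothetical weighted records: name -> weight.
records = {"old.example.com": 0, "new.example.com": 100}

random.seed(1)  # deterministic for the example
names = list(records)
picks = random.choices(names, weights=list(records.values()), k=1000)

print(picks.count("old.example.com"), picks.count("new.example.com"))
# 0 1000  (the weight-0 record is never selected)
```

Raising the weight back above 0 would resume sending a proportional share of queries to that record.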



  4. A. If you associate a health check with a multivalue answer record, Amazon Route 53
    responds to Domain Name System (DNS) queries with the corresponding IP address only
    when the health check is healthy. If you do not associate a health check with a multivalue
    answer record, Amazon Route 53 always considers the record to be healthy.

  5. D. You get access to Amazon Route 53 traffic flow through the AWS Management
    Console. The console provides you with a visual editor that helps you create complex
    decision trees.

  6. D. You can enable this function using a multivalue answer routing policy.

  7. A. Classic Load Balancer and Application Load Balancer IP addresses may change as the
    load balancers scale. Referencing them by their IP addresses instead of DNS names may
    result in some load balancer endpoints being underutilized or sending traffic to incorrect
    endpoints.

  8. B. When the enableDnsHostname attribute is set to true, Amazon will auto-assign DNS
    hostnames to Amazon EC2 instances.

  9. D. enableDnsHostnames indicates whether the instances launched in the VPC will receive
    a public DNS hostname. enableDnsSupport indicates whether DNS resolution is supported
    for the VPC. Both must be set to true for your Amazon EC2 instances to receive DNS
    hostnames within your VPC.

  10. B. Network Load Balancer has support for static IP addresses for the load balancer. You
    can also assign one Elastic IP address per Availability Zone enabled for the load balancer.


Chapter 7: Amazon CloudFront

  1. C. A CDN is a globally distributed network of caching servers that speed up the
    downloading of web pages and other content. CDNs use DNS geolocation to determine the
    geographic location of each request for a web page or other content.

  2. D. If the content is already in the edge location with the lowest latency, Amazon
    CloudFront delivers it immediately. If the content is not currently in that edge location,
    Amazon CloudFront retrieves it from the origin server to deliver.

  3. A, B, C. Amazon CloudFront is optimized to work with other AWS Cloud services as the
    origin server, including Amazon S3 buckets, Amazon S3 static websites, Amazon EC2
    instances, and Elastic Load Balancing load balancers. Amazon CloudFront also works
    seamlessly with any non-AWS origin server, such as an existing on-premises web server.

  4. B. Objects expire from the cache after 24 hours by default.

  5. D. This feature removes the object from every Amazon CloudFront edge location
    regardless of the expiration period that you set for that object on your origin server.

  6. A. You control which requests are served by which origin and how requests are cached
    using a feature called cache behaviors.



  7. D. When streaming with Amazon CloudFront and using either of those protocols, Amazon
    CloudFront will break video into smaller chunks that are cached in the Amazon
    CloudFront network for improved performance and scalability.

  8. C. When you add alternate domain names, you can use the wildcard * at the beginning of
    a domain name instead of specifying subdomains individually.

  9. D. To use an ACM certificate with Amazon CloudFront, you must request or import the
    certificate in the US East (N. Virginia) Region.

  10. D. To invalidate objects, you can specify either the path for individual objects or a path
    that ends with the * wildcard, which might apply to one object or many objects.
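The difference between an exact path and a trailing * can be sketched with standard-library glob matching. This is an illustrative simulation only; CloudFront's own path matching differs in detail (for example, its handling of query strings), and the object paths below are hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical cached object paths in a distribution.
cached_objects = ["/images/cat.jpg", "/images/dog.jpg", "/css/site.css"]

def invalidate(path):
    """Return the cached objects an invalidation path would match."""
    return [obj for obj in cached_objects if fnmatch(obj, path)]

print(invalidate("/images/cat.jpg"))  # ['/images/cat.jpg']       one object
print(invalidate("/images/*"))        # both /images/ objects     many objects
print(invalidate("/*"))               # everything in the cache
```

An exact path invalidates one object, while a path ending in * can sweep up an entire prefix.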

  11. B. Amazon CloudFront can create log files that contain detailed information about every
    user request that Amazon CloudFront receives. Access logs are available for both web and
    Real-Time Messaging Protocol (RTMP) distributions. When you enable logging for your
    distribution, you specify the Amazon S3 bucket in which you want Amazon CloudFront to
    store log files.


Chapter 8: Network Security

  1. B. AWS Organizations includes an account creation Application Programming Interface
    (API) that adds new accounts to the organization.

  2. D. An AWS CloudFormation template contains the textual definition of your environment
    in JSON or YAML format. When you instantiate a template, it is called a stack.

  3. A, B. Removing the human element with respect to creating, operating, managing, and
    decommissioning your AWS environment significantly contributes to overall security.
    People make mistakes, people bend the rules, and people can act with malice.

  4. C. Amazon Route 53 stripes its Name Servers across four TLD servers to mitigate the
    impact of a TLD failure.

  5. B. Origin Access Identity (OAI) is a special Amazon CloudFront user that you can
    associate with your Amazon S3 bucket to restrict access.

  6. B. AWS Certificate Manager uses AWS KMS to help protect the private key.

  7. C. AWS WAF integrates with Amazon CloudFront, Application Load Balancer, and
    Amazon Elastic Compute Cloud (Amazon EC2).

  8. C, D. AWS Shield Standard provides protection for all AWS customers against the most
    common and frequently occurring infrastructure (Layer 3 and Layer 4) attacks, like SYN/
    User Datagram Protocol (UDP) floods, reflection attacks, and others, to support high
    availability of your applications on AWS.



  9. A. A VPC endpoint enables you to create a private connection between your Amazon VPC
    and another AWS Cloud service without requiring access over the Internet, through a NAT
    device, a VPN connection, or AWS Direct Connect.

  10. B. Security groups are stateful, whereas network ACLs are stateless.

  11. D. Amazon Macie is a security service that uses machine learning to automatically
    discover, classify, and protect sensitive data in AWS.

  12. D. Amazon VPC Flow Logs is a feature that enables you to capture information about the
    IP traffic going to and from network interfaces in your VPC.
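Each flow log record reports, among other fields, the action taken (ACCEPT or REJECT). A minimal parser for the default version-2 record format, using a sample record modeled on the AWS documentation, shows the fields involved:

```python
# Field names of the default version-2 VPC Flow Log record format.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

# Sample record (illustrative values): an accepted SSH flow (dstport 22).
record = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

# Zip the whitespace-separated values with their field names.
entry = dict(zip(FIELDS, record.split()))

print(entry["action"])   # ACCEPT
print(entry["dstport"])  # 22
print(entry["bytes"])    # 4249
```

Filtering parsed entries on `entry["action"] == "REJECT"` is a common way to spot traffic blocked by security groups or network ACLs.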

  13. A. Configure an Amazon CloudWatch scheduled event to call an AWS Lambda function
    each hour. The AWS Lambda function processes the threat intelligence data and populates
    an AWS WAF condition. The AWS WAF is associated with the Application Load Balancer.


Chapter 9: Network Performance

  1. A, D. NAT gateways are capable of higher performance than NAT instances. Trying a
    larger instance type can increase bandwidth capacity to the private subnet instances.
    Amazon Linux has enhanced networking enabled by default. Only one route can exist for
    any given prefix.

  2. D. Enhanced networking can help reduce jitter and improve network performance.
    Placement groups and lower latency will not assist with flows leaving the VPC. Network
    interfaces do not affect network performance. An Application Load Balancer will not assist
    with performance issues.

  3. B. Using more than one instance will increase the performance because any given flow to
    Amazon S3 will be limited to 5 Gbps. Moving the instance will not increase Amazon S3
    bandwidth. Placement groups will not increase Amazon S3 bandwidth either. Amazon S3
    cannot be natively placed behind a Network Load Balancer.
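The benefit of parallelism follows from simple arithmetic. The 5 Gbps per-flow figure comes from the answer above; the 1,000 GB transfer size is an illustrative assumption, and real transfers involve protocol overhead this sketch ignores:

```python
# Back-of-the-envelope math for the per-flow bandwidth limit.
PER_FLOW_GBPS = 5          # per-flow limit cited in the answer
data_gigabits = 1000 * 8   # 1,000 GB expressed in gigabits

def transfer_seconds(parallel_flows):
    # Each instance drives its own flow, so aggregate bandwidth
    # scales linearly with the number of instances.
    return data_gigabits / (PER_FLOW_GBPS * parallel_flows)

print(transfer_seconds(1))  # 1600.0 s with a single 5 Gbps flow
print(transfer_seconds(4))  # 400.0 s with four instances in parallel
```

Quadrupling the instance count cuts the wall-clock transfer time to a quarter, which is exactly why distributing the workload beats tuning a single instance here.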

  4. A, B. R4 instances use network Input/Output (I/O) credits that allow higher bandwidths
    when credits are available, which may affect baseline performance tests. In addition, the
    database may have other application-level impacts on the performance of the TCP stream.

  5. A, C. Operating systems must support the appropriate network driver for the correct
    instance type. The AMI or instance must be flagged for enhanced networking support in
    addition to having driver support.

  6. D. Jumbo frames are not supported over the Internet, and VPN will not increase
    throughput. Increasing the packets per second will most likely reduce throughput. There
    are additional measures that could be taken instead, such as tweaking operating system
    Transmission Control Protocol (TCP) stacks, using network accelerators, or changing
    application mechanics.



  7. C. Placement groups will provide more benefit than other features for applications such as
    High Performance Computing (HPC) that are extremely sensitive to latency and throughput.

  8. C. Distribute flows across many instances to ensure that the bandwidth of any given flow
    or instance does not limit overall performance. Enhanced networking can assist with
    performance, but does not increase scale. BGP and VPC routing also do not increase the
    scale of data transfer.

  9. A. Amazon EBS Provisioned IOPS will help reduce latency and create more consistent disk
    performance.

  10. C. Jitter is the variance in delay between packets. You can reduce jitter by making delay
    more consistent. Enhanced networking and eliminating CPU or disk bottlenecks can help
    reduce jitter.

  11. C, D. C4 instances support the Intel Virtual Function driver, and C5 instances support the
    ENA driver. In addition, the instance must be flagged for enhanced networking. There are
    no specific instance routes in an Amazon Virtual Private Cloud (Amazon VPC).

  12. D. If your throughput is lower, increasing the MTU in your Amazon Virtual Private Cloud
    (Amazon VPC) can increase performance. Unless there are application issues, using the
    largest MTU available (9,001 bytes) will help increase performance. Jitter is not typically
    an issue for throughput. Amazon VPC will treat all packets fairly, without QoS. Using a
    Network Load Balancer per instance would be inefficient and reduce performance.

  13. A, C, E. AWS Direct Connect offers lower latency and more control over monitoring than
    VPN or Internet connections offer. QoS can be configured on the circuit connected to AWS
    Direct Connect, but not within the AWS networks. This typically means that the service
    provider network will honor Differentiated Services Code Point (DSCP) bits, but any egress
    packets from AWS will be dropped equally. Similarly, jumbo frames can be configured, but
    this would not offer any performance benefit because jumbo frames are only supported
    within an Amazon Virtual Private Cloud (Amazon VPC).

  14. A, D, E, G. Amazon CloudWatch metrics and host metrics will be the most efficient way
    to determine bottlenecks. Packet captures and the other options can help in some
    situations, but they are not the most efficient. Elastic network interfaces do not affect
    whether a workload is network bound.

  15. A, D. VPN instances should support enhanced networking for the highest performance
    possible. IPsec as a protocol can reduce throughput, putting more pressure on both
    packets per second and bandwidth. The VGW is managed by AWS. IPsec as a protocol
    doesn't function through a Network Load Balancer due to non-Transmission Control
    Protocol (TCP) protocols like Encapsulating Security Payload (ESP) and User
    Datagram Protocol (UDP).

  16. B. A single placement group is specific to one Availability Zone, which would reduce
    availability.

  17. A, C. It is important to support enhanced networking for instances with networking
    requirements. In addition, the instance sizes and families that the operating system supports
    will largely define its maximum throughput and bandwidth.



  18. D. The Network Load Balancer will be able to provide lower latency and faster scaling for
    TCP traffic than the Classic Load Balancer. Both the Application Load Balancer and
    Amazon CloudFront options require sharing the private key with others. You cannot
    configure enhanced networking on Elastic Load Balancing.

  19. C. Using AWS Direct Connect is the most accurate answer. AWS Direct Connect does
    not provide native encryption. VPN connections do not scale individually per connection.
    Latency is not something you can manage reliably with TCP tuning or network appliances.

  20. D. A larger MTU allows applications to send more data per packet, which can increase
    throughput. Jumbo frames are enabled in a VPC by default and work outside of
    placement groups.

  21. C. DPDK is a set of libraries and tools used to reduce networking overhead in the operating
    system.

  22. D. Elastic network interfaces do not have an effect on network performance for any
    instance that supports enhanced networking.

  23. B. Multicast traffic requires Layer 2 switching and routing infrastructure that are not
    present in a VPC. It is best to redesign the application components and provide low
    latency with a placement group.

  24. C. Bandwidth is the maximum data transfer rate at any point in the network.

  25. D. TCP has congestion management protocols built-in and will adapt to traffic changes.
    UDP does not, so it will not natively adapt to changing network conditions.


Chapter 10: Automation

  1. C. AWS CloudFormation can detect syntax errors but not semantic errors. If a service call
    it makes returns an error, then the stack creation or update process stops. By default, AWS
    CloudFormation rolls back the stack to the previous state.

  2. A. AWS CloudFormation is not aware that the route must wait for the gateway attachment
    to finish first, so this dependency must be explicitly stated. The order of resources is
    irrelevant in a template. Waiting may help reduce the errors, but it does not provide a
    guarantee and may make create or update operations unnecessarily slower.
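The explicit dependency is expressed with the DependsOn resource attribute. A minimal template fragment; the logical resource names here are hypothetical:

```yaml
Resources:
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref MyInternetGateway
  DefaultRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment   # wait for the attachment before creating the route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref MyInternetGateway
```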

  3. D. AWS CloudFormation deletes every resource except those that have a DeletionPolicy
    of Retain. It does not have a way to detect whether resources are in use (this may prevent
    a resource from being deleted, but AWS CloudFormation will still attempt to do so). Tags
    beginning with aws: cannot be altered by users.
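A resource is preserved through stack deletion with the DeletionPolicy attribute. A minimal fragment; the bucket name is illustrative:

```yaml
Resources:
  ArchiveBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain   # left in place when the stack is deleted
```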

  4. A, E. AWS CodePipeline can monitor AWS CodeCommit, public GitHub repositories,
    and ZIP file bundles on Amazon S3. Repositories stored elsewhere must be published to
    Amazon S3 as a ZIP bundle.

  5. C. The Amazon CloudWatch Logs agent can be installed on an instance to monitor log
    files. When data is added to a log file, the agent sends them to Amazon CloudWatch Logs
    where they can be aggregated into a single log group.

    496 Appendix Answers to Review Questions


  6. D. Parameters are the most straightforward way to make a template reusable. The other
    solutions can be made to work, but they introduce unnecessary complexity into the template.
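Parameters let a single template serve multiple environments. A hypothetical fragment; the parameter names and allowed values are illustrative:

```yaml
Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues: [t3.micro, t3.small, t3.medium]
  AmiId:
    Type: AWS::EC2::Image::Id
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType   # varies per deployment
      ImageId: !Ref AmiId
```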

  7. C. Creating a change set will show how a new template differs from the current stack state.
    Executing the change set ensures that only those changes are executed. The execution will
    be rejected if the stack changed since the change set was generated. Executing the template
    instead may overwrite intermediate changes. The ValidateTemplate API only verifies the
    syntactic correctness of the template. Approval actions are used with AWS CodePipeline,
    not AWS CloudFormation.

  8. A. A version control system such as Git provides a history of changes made to the source
    code and allows you to create branches for experimental development. Amazon S3
    versioning only allows linear changes and does not provide visualization capabilities. AWS
    CloudFormation and AWS CodePipeline do not record history.

  9. D. AWS CloudFormation cannot detect this semantic error. Resource creation is unordered
    except when there is a dependency, so the order in which the subnets are created is
    indeterminate. When an error is encountered, AWS CloudFormation attempts to roll back
    the update.

  10. A. The stack policy can prevent resources from being modified, deleted, or replaced when
    a stack is updated. The IAM service role will also effectively do this, but it will also
    prohibit other subnets from being deleted. DeletionPolicy only applies when the AWS
    CloudFormation stack is being deleted, not the resource itself. Tags starting with aws:
    cannot be modified.
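A stack policy is a JSON document attached to the stack. This hypothetical example denies all update actions on one protected resource while permitting updates elsewhere; the logical resource ID is illustrative:

```json
{
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "Update:*",
      "Resource": "LogicalResourceId/ProductionSubnet"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "Update:*",
      "Resource": "*"
    }
  ]
}
```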


Chapter 11: Service Requirements

  1. C, D. Amazon AppStream 2.0 and Amazon WorkSpaces are both AWS Cloud services that
    support end-user connectivity into applications running within a VPC.

  2. B. There are two adapters connected to each WorkSpace instance: one in a customer
    Virtual Private Cloud (VPC) and another in an AWS-managed VPC.

  3. B, C. AWS Lambda requires NAT to connect to the Internet. Public IP addresses cannot be
    assigned to an AWS Lambda function.

  4. A, B, D. Internet connectivity is not a requirement for Amazon EMR; however, Amazon S3
    connectivity, DNS hostnames, and private IP addresses are required.

  5. D. AWS Lambda is an AWS Cloud service that allows for serverless code execution.

  6. B, C. A NAT gateway or public IP with an Internet gateway attached to the VPC is
    required for Internet connectivity within Amazon WorkSpaces. Both options require user
    configuration and are not set up by default.

  7. B. Amazon RDS is the AWS service that provides managed database instances.

  8. A. A Multi-AZ deployment requires two subnets in order to provide high availability for
    Amazon RDS.



  9. C. AWS Elastic Beanstalk can automatically provision and scale an infrastructure on
    behalf of a user.

  10. D. AWS Elastic Beanstalk deploys the infrastructure automatically. Custom Virtual Private
    Clouds (VPCs) and security groups can be used but are not required.

  11. A, B. AWS Database Migration Service (AWS DMS) facilitates replication between
    different database engines. Direct connectivity between the databases is not required.

  12. C. Amazon Redshift requires an IP for each node in the cluster, plus one additional IP for
    the leader node.

  13. D. Only the Amazon RDS hostname (or a CNAME to it) should be used to connect. It will
    be updated in the event of a failover.


Chapter 12: Hybrid Architectures

  1. A. An AWS Direct Connect public VIF allows private connectivity from on-premises to
    AWS Cloud services.

  2. B. Host-based IPS/IDS is a more scalable solution and does not impose the high
    availability and throughput scalability challenges that inline IPS/IDS gateways impose. It
    is also more cost effective because it does not require inline gateways to be run.

  3. A. VGW is a managed endpoint.

  4. A. You can use a VPN/routing software on the Amazon EC2 instance that supports packet
    manipulation based on QoS markings. Using separate Amazon EC2 VPN instances will not
    help because the traffic from the VPC to on-premises can only use one Amazon EC2
    instance as a gateway. Using two VPCs will not work because the traffic from the VPC to
    the on-premises gateway will not have QoS and so will contend for the same router
    resources.

  5. D. Only interface VPC endpoints can be accessed over AWS Direct Connect.

  6. A. To send all traffic via a VPC, you will have to proxy all traffic via Amazon EC2
    instances. AWS Storage Gateway supports HTTP proxy in the file gateway mode.

  7. A. You can use a public VIF to access Amazon DynamoDB. You can use Amazon DynamoDB
    client libraries to encrypt traffic as it is being written to the database. VPN is not required.

  8. B. You can reduce latency by setting up a local hub in the Singapore region. Traffic would
    then flow from the spoke VPC in the Mumbai region to the hub in the Singapore region and
    then to the spoke in the Singapore region. GRE should be used over IPsec for reduced
    latencies because GRE does not encrypt data, resulting in faster packet processing.

  9. C. AWS Direct Connect private VIF will enable connectivity from on-premises Amazon
    EC2 instances to the on-premises Active Directory server.

  10. C. Transit VPC should not be used for basic hybrid IT connectivity. It should be leveraged
    only for special scenarios, such as inline packet inspection.



Chapter 13: Network Troubleshooting

  1. B. Sticky sessions will enable a session to be kept with the same web server to facilitate
    stateful connections.

  2. D. Because you can access the instance but not the Internet, there is no default route to
    the Internet through the on-premises network.

  3. C. NAT gateways need to be in a public subnet to enable communication with the Internet.

  4. B. All of the listed items except a NAT gateway are required for Internet connectivity
    from a public subnet.

  5. A. There is a limit of 25 VPC peering connections per VPC by default.

  6. E. Answers A through D are all possible misconfigurations.

  7. C, D. Both Domain Name System (DNS) settings must be enabled on a VPC for a private
    hosted zone to work correctly.
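These two attributes correspond to the EnableDnsSupport and EnableDnsHostnames VPC properties. A minimal CloudFormation fragment; the resource name and CIDR are hypothetical:

```yaml
Resources:
  AppVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true      # DNS resolution within the VPC
      EnableDnsHostnames: true    # both must be true for private hosted zones
```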

  8. D. Some AWS Cloud services rely on the existence of a default VPC. There is an option to
    create a new default VPC.

  9. C. Network ACL rules can deny traffic.

  10. A, B. Flow logs and packet captures are two ways to record the source and destination IP
    addresses of traffic.
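A default-format VPC Flow Log record is a single space-separated line with fields in a fixed order. A small parsing sketch, using a sample record in the format shown in AWS documentation:

```python
# Field order of a default (version 2) VPC Flow Log record.
FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log(line: str) -> dict:
    """Split a default-format flow log record into named fields."""
    return dict(zip(FIELDS, line.split()))

record = parse_flow_log(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(record["srcaddr"], "->", record["dstaddr"], record["action"])
```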


Chapter 14: Billing

  1. C. Peering carries a $0.01 per-GB charge for traffic leaving or entering a VPC, so each
    gigabyte transferred incurs $0.02 in total: $0.01 for egress from the source VPC plus
    $0.01 for ingress to the destination VPC. Being in the same Availability Zone does not
    affect pricing.
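The arithmetic can be sketched directly. The $0.01/GB figure is the rate quoted in the answer; actual rates vary by region:

```python
RATE_PER_GB = 0.01  # per-GB charge applied on each side of the peering connection

def peering_cost(gb: float) -> float:
    """Total data transfer charge for VPC peering: egress from the
    sender VPC plus ingress to the receiver VPC."""
    return gb * RATE_PER_GB * 2

print(f"${peering_cost(1):.2f} per GB transferred")
```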

  2. B. Because the data transfer is to another region, you will be charged for egress from the
    source region.

  3. B. The VGW IPsec endpoints are considered AWS public IPs, and the resource is owned by
    you. The reduced AWS Direct Connect rate applies because of these factors.

  4. B. Your on-premises data center is not within the AWS public IP address range, so data
    transfer is metered as Internet-in, which is not charged.

  5. B. Budgets enable forecasting and allow you to set alarms to trigger on current billing.

  6. B. Charges start when the connection becomes available for the first time, or 90 days from
    creation, whichever occurs first.

  7. C. Active session charge is used as a component of Load Balancer Capacity Units (LCUs) in
    Elastic Load Balancing, not NAT gateway.



  8. A. Data transfer from Amazon S3 to Amazon CloudFront is not charged.

  9. B. The bucket owner always pays for data transfer from their bucket. In this particular
    example, they pay Internet-out rates.

  10. B. The Availability Zone does not affect the pricing when communicating via public IP, so
    the charge is at the regional data transfer rate.


Chapter 15: Risk and Compliance

  1. B, D. Security groups and network ACLs permit or deny traffic. These determinations are
    reflected in Amazon VPC Flow Log data.

  2. D. Amazon Inspector supports evaluation durations between 15 minutes and 24 hours.

  3. A. AWS Artifact provides on-demand access to AWS security and compliance documents,
    also known as audit artifacts.

  4. A. IAM uses a PARC access model.

  5. A. The AWS CloudTrail record digest uses SHA-256 for hashing.
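The hashing primitive behind the digest can be shown with Python's standard hashlib. This sketch illustrates only the SHA-256 step, not CloudTrail's complete digest and signing chain:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

print(sha256_hex(b"example log file contents"))
```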

  6. B. AWS accepts requests via an authenticated, online web form and via email.

  7. D. Authorization may be requested for a maximum of 90 days per request.

  8. C. AWS is responsible for maintaining Amazon VPC separation assurance; however,
    the customer is responsible for configuring subnets, security groups, NACLs, and other
    application-layer mechanisms appropriately.

  9. C. The AWS Organizations SCP is applied to member account root users in addition to
    IAM users.

  10. D. Amazon Inspector is an automated security assessment service that helps improve the
    security and compliance of applications deployed on AWS.


Chapter 16: Scenarios and Reference
Architectures

  1. A. Amazon Route 53 weighted policies provide the most control over how much traffic
    is directed to specific application resources. Failover policies would not support a gradual
    migration, and latency-based and geolocation policies offer limited administrative control
    over which requests get directed at specific application resources.



  2. A. VPN connections typically reuse existing on-premises VPN equipment and Internet
    connections. AWS Direct Connect requires a new circuit to be provisioned. Options C and D
    are options for providing access to individual applications or AWS services, not for
    connecting networks.

  3. D. Amazon Route 53 geolocation policies provide the ability to direct users based on
    their geographic location, and are therefore the only way to direct customers to
    applications based on their locality. Weighted and failover policies are indiscriminate of
    location. Latency-based policies operate based on end-user latency, which often is
    correlated to end-user location, but not always.

  4. A. AWS WAF can be integrated with Application Load Balancer for blocking IP addresses
    at scale. Network ACLs can deny traffic but not to the required scale. AWS Shield and
    Amazon VPC Private Link do not provide capabilities for denying network traffic.

  5. A. In this scenario, the network requirements for on-premises network connectivity are
    exceeding the network capacity of Amazon EC2 instances operating outside of a placement
    group. This requirement eliminates Options B and D. Option C provides a different
    interface for interacting with on-premises resources, but it does not reduce the amount of
    traffic that must traverse the network. AWS Direct Connect connections will allow
    on-premises connectivity to scale beyond individual Amazon EC2 instance network
    limitations, and AWS Direct Connect gateway will provide a similar experience as a
    transit VPC for all attached VPCs.

  6. C. Option A is not highly available. Option B disables cross-region traffic, which is not the
    desired outcome. Option D is not possible. This leaves option C as the best answer.

  7. B. Amazon Route 53 latency-based policies will route a request to the closest location
    based on client latency. Weighted and failover policies are indiscriminate of location.
    Geolocation is indiscriminate of end-user latency.

  8. C. AWS Direct Connect public Virtual Interfaces (VIF) support on-premises access to AWS
    APIs. All of the other options require additional infrastructure and configuration, which
    can introduce additional complexity and variability into the network design.

  9. C, D. Amazon Route 53 geolocation policies are suited for directing user traffic to
    location-specific services. Failover policies are useful for sending requests to a redundant,
    backup location in the event that the primary site fails its health checks.

  10. D. NAT Gateways provide a highly scalable network egress option for Amazon EC2
    instances in private networks. Egress-only Internet Gateways provide IPv6 egress traffic.
    Neither transit VPCs nor Amazon EC2 NAT instances are as scalable as NAT Gateways.

  11. A. Replicating your users and permissions to a VPC peered shared services network is the
    only option that will reduce on-premises network traffic. All of the other options continue
    to send all authentication and authorization traffic to on-premises resources.


Index



A

A record, 162

AAAA record, 162

account-id Flow Log element, 43

ACLs (Access Control Lists), 6, 249–250

Amazon VPC, 18

network ACLs, 29

ACM (AWS Certificate Manager), Amazon
CloudFront and, 221

action Flow Log element, 44

Active Directory, hybrid deployment and,
367–368

alarms, networking monitoring, 327–328
Amazon AppStream 2.0, 347–348

Amazon CloudFront, 4, 5, 8, 9, 208

access logs, 222

ACM (AWS Certificate Manager), 221
activity monitoring, 449

AWS Lambda@Edge, 222–223

cache behavior, 215–216

website, 217

cache control, 210

CDN (Content Delivery Network), 240
content delivery, 210–213

distributions, 209, 240

deleting, 229

domain names, alternate, 219–220, 227–228

dynamic content, 215

HTTP/2 and, 216–217

edge locations, 209, 214

Field-Level Encryption, 223, 241

HTTPs, 220–221, 228

invalidating objects, 221–222

media streaming, 209

OAI (Origin Access Identity), 240
origin server, 209


origins, 210

private content, 217

Regional Edge Caches, 214–215

RTMP distributions, 218–219, 226–227

web distributions, 215, 226

Wowza Streaming Engine 4.2, 219

Amazon CloudTrail, 254

Amazon CloudWatch, 254, 256,

325–327, 444

activity monitoring, 446–447

alarms, 327–328

log stream, 255

malicious activity, 452

metric filters, 255–256, 330–331

text logs, 329–330

troubleshooting and, 400
Amazon CloudWatch Logs, 444

activity monitoring, 447–448

Amazon DynamoDB, 5, 37

endpoints, 62

Amazon EBS, optimized instances, 277
Amazon EC2 (Elastic Compute Cloud), 2, 7, 9

Amazon VPC and, 16
Availability Zones and, 3
exercises

connection test, 53–54

launching, 53–54
instance networking

Amazon EBS and, 277–278
DPDK, 279

enhanced networking, 278, 279

instance families, 276–277

NAT gateways, 278

network drivers, 278
operating system support, 279
placement groups, 277



multi-locations, 472–476

NAT gateways, 278

security and, 250–251

Amazon ECS, 349–350

Amazon EFS, 374

Amazon Elasticsearch Service, 256
Amazon EMR, 350–351

cluster creation, 359

cluster creation exercises, 359
Amazon GuardDuty, 252–253

Amazon Inspector, 253, 453
Amazon Kinesis Firehose, 256–257
Amazon Kinesis Streams, 37

Amazon Linux AMI (Amazon Machine
Image), 32

Amazon Linux HVM (Hardware Virtual
Machine), 279

Amazon Macie, 253–254

Amazon RDS (Relational Database Service),
5, 351, 442

setup, 358

setup exercises, 358

Amazon Redshift, 352–353

cluster creation, 359

cluster creation exercises, 359

Amazon Route 53, 4, 5, 8, 9, 157, 168–169

anycast striping, 239–240

DNS (Domain Name System) Service,
170–171

domain registration, 169–170, 199–200

domain transfer, 170

ELB Sandwich configuration, 203–204
HAProxy instances, 203–204

health checking, 178–180

hosted zones, 171–172

A record, 202

record types, 172

routing policies, 172–173

failover policy, 174

geolocation policy, 174–176

geoproximity routing, 177–178

latency-based policy, 173–174
multivalue answer routing, 176
simple policy, 173

traffic flow, 176–177

weighted, 202–203

weighted policy, 173

shuffle sharding, 239

SLA (Service Level Agreement), 239
WRR (Weighted Round Robin),

470–471

Amazon S3 (Simple Storage Service), 5, 37,
240, 372–373

Amazon SNS (Simple Notification Services),
254–255

Amazon SQS (Simple Queue Service), 5
Amazon VPC, 7, 9, 10, 16–17

ACLs (Access Control Lists), 18, 29
Amazon EC2 and, 16

CIDR blocks, 19–20

customer gateways, 19, 35–36

DHCP (Dynamic Host Configuration
Protocol) option sets, 19, 42

DNS server, 19, 43

dual-stack mode, 17

EIGWs (Egress Only Internet Gateways),
19, 33–34

elastic network interfaces, 19,
41–42

endpoints, 19, 36–38, 58

endpoint policy, 59

security and, 59

Flow Logs, 19

account-id, 43

action, 44

bytes, 44

dstaddr, 44

dstport, 44

end, 44

interface-id, 44

log-status, 44

packets, 44

protocol, 44

srcaddr, 44

srcport, 44

start, 44

version, 43

Internet gateways, 19, 30–31



IP addressing, 18

IPv4, 24–25

IPv6, 25–26

IPv4 address and, 16–17, 19–21

IPv6 address and, 16–17, 19–21
NAT (Network Address Translation)

gateways, 19, 32, 33

instances, 19, 32–33

network ACLs, 29

peering, 19, 38–40

AWS PrivateLink comparison, 67
transitive routing, 73–74

placement groups, 19, 40–41

resizing, 74–76

route tables, 18, 22–23

security benefits, 71–73

security groups, 18, 26–29

shared services, 69–70

size, 19

subnets, 18

Availability Zones, 19

VGWs (Virtual Private Gateways), 19,
35–36

VPNs (virtual private networks), 19,
35–36

Amazon VPC (Amazon Virtual Private
Cloud), 2, 306

DHCP (Dynamic Host Configuration
Protocol) server, 4

DNS server, 4

Endpoints, 5

instance metadata, 4

mapping service, 5

tenant isolation, 5
Amazon VPC Flow Logs, 444

activity monitoring, 448–449

analysis, 451–452

troubleshooting and, 401

Amazon WorkSpaces, 5, 346–347
hybrid deployment and, 371
setup, 357

setup exercises, 357

AMIs (Amazon Machine Image), 32–33,
104–105


anycast striping, 239–240

APIs (Application Programming Interfaces),
8, 210

logging, 444

APN (AWS Partner Network), 134–135
ARN (Amazon Resource Name), 38
ARP (Address Resolution Protocol), 5
ASN (Autonomous System Number), 95
audit reports, 438

authoritative DNS (Domain Name
System), 157

Availability Zones, 3, 10

AWS, access control, 439–441

AWS CloudFront distributions, 441–442

AWS Organizations, 441
AWS Certificate Manager, 242

AWS Certified Advanced Networking -
Specialty exam, 2

AWS Certified Solutions Architect -
Associate exam, 2

AWS CLI (Command Line Interface), 132
encryption and, 442

AWS Cloud Adoption Framework
(CAF), 235

AWS Cloud services, 10

Amazon AppStream 2.0, 347–348

Amazon CloudTrail, 254

Amazon CloudWatch, 254, 256

Amazon ECS, 349–350

Amazon Elasticsearch Service, 256
Amazon EMR, 350–351

Amazon Kinesis Firehose, 256–257
Amazon RDS (Relational Database

Service), 351

Amazon Redshift, 352–353
Amazon SNS (Simple Notification

Services), 254–255

Amazon WorkSpaces, 346–347

AWS DMS, 351–352

AWS Elastic Beanstalk, 353–354
AWS Glue, 353

AWS Lambda, 254, 257, 348–349

IAM, 254, 256

Kibana, 256



resource configuration, 319–321

SSH, 255–256

VPC Flow Logs, 257

AWS CloudFormation, 236, 306

stacks, 236

templates, 236, 306

AWS CloudFront, distributions, access
control and, 441

AWS CloudTrail, 8, 444, 445

activity monitoring, 445

AWS CodeCommit, 307

AWS CodePipeline, 307

AWS Config, 444

activity monitoring, 445–446

troubleshooting and, 401

AWS DDoS Response Team (DRT), 450
AWS Direct Connect, 7, 9, 130

BFD (Bidirectional Forwarding
Detection), 130, 131, 144

BGP (Border Gateway Protocol) and, 130
BGP-4, 130

eBGP (External BGP), 130
iBGP (Internal BGP), 130

billing, 420

data transfer, 148–149

port hours, 147–148
connectivity

logical

Direct Connect Gateway, 139
virtual interfaces, 136–140

physical

Carrier Hotels, 131

CNFs (Carrier Neutral Facilities), 131
connection request, 132

cross-connects, 132–133

dedicated connections, 131

hosted connections, 135
LAGs (Link Aggregation

Groups), 134

LOA-CFA, 132

locations, 131

multiple connections, 133–134

partners, 134–135

provisioning process, 132

resilient

dual connections, 140–141, 143
single physical connection, 140,

141–142

encryption, 376–379

fees, 421

hybrid deployment, VPC endpoints,
375–376

VGWs and, 139

VIFs and, 139

hosted, 140

private, configuration, 143–144

public, configuration, 143

VLANs, 130
VPNs

Backup VPN, 144–145

over AWS Direct Connect, 145–147
AWS DMS (Database Migration Service),
351–352

AWS Elastic Beanstalk, 353–354
application creation, 358
application creation exercises, 358

AWS Glue, 353

AWS Hyperplane, 64

AWS IAM Policy Simulator, troubleshooting
and, 401

AWS Identity and Access Management, 401
AWS Internet of Things (IoT), 442

AWS KMS (Key Management
Service), 242

AWS Lambda, 254, 348–349

Amazon CloudWatch, 257

malicious activity, 452

AWS Lambda@Edge, 222–223

AWS Management Console, AWS Config
data, 446

AWS Marketplace, 104–106, 453

Amazon Inspector, 453

IDS/IPS (Intrusion Detection System/
Intrusion Prevention System),
452–453

SIEM (Security Information and Event
Management), 452

AWS Organizations, 235, 441



AWS Overview of Security Processes
whitepaper, 438

AWS PrivateLink, 36, 65–66

endpoints, 60

fees, 421

service consumers and, 68–69
service providers and, 68
VPC peering comparison, 67

AWS Risk and Compliance
whitepaper, 438

AWS Security Best Practices
whitepaper, 438

AWS Service Catalog, 237
launch constraints, 237

workflow, 237

AWS Shield, 8–10, 449

DDoS attacks, 245

AWS Shield Advanced, 450–451

AWS Trusted Advisor, troubleshooting,
401

AWS VPN CloudHub, 98–99

AWS WAF (Web Application Firewall), 8,
10, 242, 443, 452–453

geographic match, 244

IP addresses, 243

size constraints, 243–244

SQLi, 244

string match, 244

transformation, 243

XSS (cross-site scripting), 243
AWS-native tools, troubleshooting

Amazon CloudWatch, 400
Amazon VPC Flow Logs, 401
AWS Config, 401

AWS IAM Policy Simulator, 401
AWS Trusted Advisor, 401


B

bandwidth, 274

BFD (Bidirectional Forwarding Detection),
AWS Direct Connect, 130, 131

BGP (Border Gateway Protocol), 36

AWS Direct Connect, 130
BGP-4, 130

eBGP (External BGP), 130
iBGP (Internal BGP), 130

billing

data processing fees, 420
data transfer

Amazon CloudFront, 423

AWS Direct Connect VIFs, 424
inter-Availability Zone, 423

Internet, 423

intra-Availability Zone, 424
region to region, 423

region via public IP, 423
VPC peering, 424

VPN endpoints VGWs, 424
data transfer costs, 420

port-hour fees, 420

scenarios, 424–428

service fees, 420

bytes Flow Log element, 44

C

CAA (Certificate Authority Authorization)
record, 162

CDN (Content Delivery Network), 8,
208–209

Amazon CloudFront, 240

geolocation, 208

change sets, template changes, 318–319
CIDR (Classless Inter-Domain Routing)

block, 16–17, 97

client-to-site VPNs, 94, 111–113

CloudHub, 70

CMDB (Configuration Management
Database), 445

CNAME (Canonical Name) record, 162, 209
CNFs (Carrier Neutral Facilities), 131
compliance

AWS Risk and Compliance, 438

NDA (Non-Disclosure Agreement), 438
scoping and, 437



configuration snapshot, 446

cross-account network permission, 76–77
cross-connects, 132

customer gateways, Amazon VPC, 19,
35–36


D

DaaS (Desktop as a Service), 346–347
data processing, 284

fees, 420

data transfer costs, 420
billing

Amazon CloudFront, 423

AWS Direct Connect VIFs, 424
inter-Availability Zone, 423

Internet, 423

intra-Availability Zone, 424
region to region, 423

region via public IP, 423
VPC peering, 424

VPN endpoints VGWs, 424

DDoS (Distributed Denial of Service), 8, 238
defense in depth, 234

design, VPNs (Virtual Private Networks)
L3 encryption, Amazon EC2, 114–115
L3 encryption, AWS Direct Connect, 115
multicast in Amazon VPC, 115

on-premises network to VPC, 114
transitive routing, 115–117

DHCP (Dynamic Host Configuration
Protocol) server, 4

option sets, Amazon VPC, 19, 42
DHE (Diffie-Hellman Encryption), 442
Direct Connect Gateway

VGWs and, 139

VIFs and, 139

DMVPN (Dynamic Multipoint Virtual
Private Network), 94

DNS (Domain Name System), 4, 43,

156–157, 163–164

Amazon DNS versus Route 53, 165

Amazon EC2 DNS resolver, 166–168
attributes, 165

authoritative, 157, 159

domain names, 158

registrars, 159

FQDN (Fully Qualified Domain Name),
158

hosts, 158

hybrid deployment and, 368
IP addresses, 158

Name Servers, 159

domain level, 160

resolving, 160–161

non-authoritative, 159

query flood, 450

resolution, 159–160
resource records

A, 162

AAAA, 162

CAA (Certificate Authority
Authorization), 162

CNAME (Canonical Name), 162
MX (Mail Exchange), 162

NAPTR (Name Authority Pointer),
162

NS (Name Server), 162
PTR (Pointer), 162

SOA (Start of Authority), 161
SPF (Sender Policy Framework),

162–163

SRV (Service), 163

TXT (Text), 163

Simple AD and, 166

SLD (Second-Level Domain), 157
split-horizon, 68

subdomains, 158

TLD (Top-Level Domain), 157
servers, 160

VPC peering, 165–166

zones, 159

DNS server, Amazon VPC, 19
documentation, exercises, 11
domain name registrars, 159



DPDK (Data Plane Development Kit), 279
dstaddr Flow Log element, 44

dstport Flow Log element, 44

dual-stack mode, Amazon VPC and, 17


E

ECDHE (Elliptic-Curve Diffie-Hellman
Encryption), 442

edge locations, 4, 5

Amazon CloudFront, 214

edge networking, 2

EIGWs (Egress Only Internet
Gateways), 22

Amazon VPC, 19, 33–34

Elastic IP address, 24–25

Elastic Load Balancing, 6, 9, 37, 180–181
Application Load Balancer, 183–184
Availability Zones, 180–181

Classic Load Balancer, 182–183
configuration, 200–201

connection draining, 190
cross-zone load balancing, 190
ELB sandwich, 192–193

fees

Application Load Balancer, 421
bandwidth, 422

Classic Load Balancer, 422
LCUs (Load Balancer Capacity

Units), 421

Network Load Balancer, 421
new connections, 422

rule evaluations, 422

health checks, 191–192
HTTPS Load Balancers, 187

Idle Connection Timeout, 189–190
internal load balancers, 187

Internet-Facing Load Balancer, 186–187
listeners, 187–188

rules, 188

load balancer comparison, 181–182
Network Load Balancer, 184–186


proxy protocol, 190

sticky sessions, 191

targets, 188–189

elastic network interfaces, 41–42, 346

Amazon VPC, 19

ENA (Elastic Network Adapter) driver, 278
encryption

Amazon CloudFront PoPs, 444
API calls, 442

API endpoints, 442

AWS WAF (Web Application
Firewall), 443

DHE (Diffie-Hellman Encryption), 442
ECDHE (Elliptic-Curve Diffie-Hellman

Encryption), 442

load balancers, 444

POODLE (Padding Oracle On
Downgraded Legacy Encryption), 442

in transit, 443–444
end Flow Log element, 44
endpoints

Amazon VPC, 19, 36–38

AWS PrivateLink, 60

gateway endpoints, 59–60

interface endpoints, 59–60

Kinesis, 65

site-to-site VPNs, 94

VPC, hybrid deployment, 375–376
enhanced networking

AWS Direct Connect and, 282
DPDK, 279

drivers, 278

DSCP (Differentiated Services Code
Point), 282–283

enabling, 279

flow performance, 281

instance bandwidth, 281

jumbo frames, 280

load balancer performance, 281
network I/O credit mechanism, 280
operating system support, 279

QoS (Quality of Service), 282–283
VPN performance, 282



Enterprise Accelerator, 438
errors, templates

semantic, 314–315

validation, 314

ESP (Encapsulating Security Payload), 95
exercises

Amazon CloudFront
block IP address, 268
block requests, 268

distributions, 266–267

deleting, 229

domain names, alternate, 227–228
Field-Level Encryption, 223

HTTPs, 228

origin access identity, 267
RTMP distributions, 226–227

web distributions, 226
Amazon CloudWatch metrics, 298
Amazon EC2

connection test, 53–54

launching, 53–54

Amazon EMR cluster creation, 359
Amazon Inspector, 460

Amazon RDS (Relational Database
Service) setup, 358

Amazon Redshift cluster creation, 359
Amazon S3 access over AWS Direct

Connect, 390–391

Amazon VPC flow log inspection, 413
Amazon Workspaces setup, 357

AWS Artifact, 460–461

AWS CloudTrail, enabling encryption,
461–462

AWS Config, 462–463

AWS Elastic Beanstalk application
creation, 358

AWS Trusted Advisor, 414, 461
billing alarm creation, 430
budget confirmation, 430

Cost and Usage report, 431
documentation, 11

domain registration, 199–200

ELB configuration, 200–201

ELB Sandwich configuration, 203–204
encryption set up over AWS Direct

Connect, 391–392

enterprise shared services, 478–479
flow log setup, 412

gateway VPC endpoints, 80–82

HAProxy instances, 203–204

health monitoring, 339–340

hybrid Three-Tier web app, load balancer,
389–390

instance-to-instance connectivity
test, 413

jumbo frames, 296

LAGs, creating, 152

log file validation, 461–462

network security, 479–480
performance testing across Availability

Zones, 293–294

pipeline integration, 338–339

placement groups, 294–295

regions, 296–297

rollbacks, 336

routing policies, weighted, 202–203
stack updates, 334–335

static Amazon S3 website, 266
template creation, 334

template parameterization, 335–336

traceroute, 414

transit VPC global infrastructure,
392–393

version control, 337–338

VGW, detached, 123–124

VIF, private, 151

hosted, 152

IPv6 and, 151–152

VIF, public, 150–151

VPC (Virtual Private Cloud)
creating, 51

endpoint, 84

endpoint service, 82–83

Internet connection, 52


IPv4 CIDR ranges, 86–87
routing, 52

subnet creation, 51–52

transitive routing, 85–86

VPN (Virtual Private Network), 123–124
connections, 121–123

VPC connection via transit point,
124–125


F

Flow Logs, Amazon VPC, 19
account-id, 43

action, 44

bytes, 44

dstaddr, 44

dstport, 44

end, 44

interface-id, 44

log-status, 44

packets, 44

protocol, 44

srcaddr, 44

srcport, 44

start, 44

version, 43

FQDN (Fully Qualified Domain Name), 158


G

gateway VPC endpoints, 59–60

Amazon DynamoDB, 62

Amazon S3, 60–62

exercise, 80–82

remote network access, 62–63

routing table, 61

security, 63

generic TLD (Top-Level Domain), 157
geographic TLD (Top-Level Domain),

157–158


global infrastructure, 9, 10

Availability Zones, 3

edge locations, 4

Regions, 2–3

GRE (Generic Routing Encapsulation),
94

GuardDuty, 8

GUAs (Global Unicast Addresses), 16,
25–26


H

horizontal scaling, 108–110

HPC (High Performance Computing),
40, 283

HTTP flood/cache-busting attacks, 450
HTTPS, Amazon CloudFront and,

220–221

hybrid networking, 2
applications

Active Directory and, 367–368
Amazon Workspaces, 371
applications

Internet access, 375

storage access, 371–374

DNS and, 368

operations, 370

remote desktop application, 371
Three-Tier web application,

365–367

AWS Direct Connect
encryption, 376–379

VPC endpoints, 375–376

connectivity, 364–365

scenario, 468–471

transitive routing in, 379–380
architecture considerations,

380–383

VPC scenarios, 384–386

VPC endpoints, 375–376

Hypervisor, 5



I

IAM (Identity and Access Management), 254
Amazon CloudWatch, 256

policies, 38, 235

ICANN (Internet Corporation for Assigned
Names and Numbers), 157

Identity and Access Management (IAM), 59
IDN (Internationalized Domain Names), 157
IDS (Intrusion Detection System), 452–453
IDS/IPS (Intrusion Detection System/

Intrusion Prevention System), 452–453
IKE (Internet Key Exchange), 96

implicit routers, 22

IPS (Intrusion Prevention System), 452–453
IPsec (Internet Protocol Security), 36

IPsec VPN, 94

IPv4, Amazon VPC, 24–25
IPv4 address, 5

Amazon VPC and, 16–17
IPv6 comparison, 17

subnets and, 19–21
IPv6, Amazon VPC, 25–26
IPv6 address

Amazon VPC and, 16–17
IPv4 comparison, 17

subnets and, 19–21

infrastructure as code, 306–307

stacks, creating, 307–310

templates, creating, 307–310

instance metadata, 23

instance networking, Amazon EC2
Amazon EBS and, 277–278
DPDK, 279

enhanced networking, 278, 279

instance families, 276–277

NAT gateways, 278

network drivers, 278
operating system support, 279
placement groups, 277

interface VPC endpoints, 59–60, 64

AWS PrivateLink, 65–66
interface-id Flow Log element, 44

Internet gateways, Amazon VPC, 19, 30–31
InterNIC (Internet Network Information Center), 157
IP (Internet Protocol), 2

IP addressing
Amazon VPC

IPv4, 24–25

IPv6, 25–26

Amazon VPC, 18, 23–24

DNS (Domain Name System), 158
Elastic IP addresses, 24–25

NAT, disabling, 76

VPC resizing, 74–76

IP packet, 5

J

jitter, 275

jumbo frames, 276


K

Kibana, Amazon CloudWatch, 256
Kinesis endpoint, 65


L

LAGs (Link Aggregation Groups), 134
latency, 275

LLAs (Link-Local Addresses), 25–26
LOA-CFA (Letter of Authorization and
Connecting Facility Assignment), 132
logging, APIs, 444

log-status Flow Log element, 44


M

MAC (Media Access Control), 5
malicious activity detection

Amazon CloudWatch, 452

Amazon VPC Flow Logs, 451–452


AWS Lambda, 452

AWS Marketplace, 453

Amazon Inspector, 453

IDS/IPS, 452–453

SIEM, 452

AWS Shield, 449

AWS Shield Advanced, 450–451

mapping service, 5

media streaming, Amazon CloudFront, 209
metadata, instance metadata, 23

metric filters, Amazon CloudWatch, 255–256
MPLS (Multiprotocol Label Switching), 135
MTU (Maximum Transmission Unit), 276
multi-location resiliency, 472–476

MX (Mail Exchange) record, 162


N

Name Servers, 159

domain level, 160

NAPTR (Name Authority Pointer)
record, 162

NAT (Network Address Translation), 98
billing and, 420

devices, 58
instances

Amazon VPC, 19

IP addresses and, elastic, 76
NAT gateways

Amazon VPC, 19, 31–33

fees, 421
NAT instances

Amazon VPC, 32–33

NAT-T (Network Address Translation
Traversal), 98

NDA (Non-Disclosure Agreement), 438
network ACLs, 29

network activity monitoring, 444–445

Amazon CloudFront, 449

Amazon CloudWatch, 446–447


Amazon CloudWatch Logs, 447–448

Amazon VPC Flow Logs, 448–449

AWS CloudTrail, 445

AWS Config, 445–446

Network Load Balancer, AWS PrivateLink
and, 66

network performance
backup, 284

bandwidth, 274

data processing, 284

data transfer, on-premises, 284–285
DPDK, 279

enhanced networking, 278, 279
AWS Direct Connect and, 282
DPDK, 279

drivers, 278

DSCP (Differentiated Services Code
Point), 282–283

enabling, 279

flow performance, 281

instance bandwidth, 281

jumbo frames, 280

load balancer performance, 281
network I/O credit mechanism, 280
operating system support, 279

QoS (Quality of Service), 282–283
VPN performance, 282

ingestion, 284
instance networking

Amazon EBS and, 277–278
instance families, 276–277

NAT gateways, 278

placement groups, 277

jitter, 275

jumbo frames, 276

latency, 275

MTU (Maximum Transmission
Unit), 276

network appliances, 285–286

network drivers, 278
operating system support, 279
packet loss, 275

packets per second, 276


testing

Amazon CloudWatch metrics,
286–288

methodology, 288–289

throughput, 275

network services, documentation, 11
networking

edge networking, 2

hybrid networking, 2
networking monitoring tools

alarms, 327–328

Amazon CloudWatch, 325–327

metric filters, 330–331

text logs, 329–330

health metrics, 325–327
networking services

Amazon CloudFront, 8

Amazon EC2, 7

Amazon Route 53, 8

Amazon VPC, 7

AWS Direct Connect, 7
AWS Shield, 8–9

AWS WAF, 8

Elastic Load Balancing, 6
GuardDuty, 8

networks, drivers, 278

NS (Name Server) record, 162
nslookup, troubleshooting, 399
NUMA (Non-Uniform Memory
Access), 279


O

OSI (Open Systems Interconnection),
troubleshooting and, 398

ownership model, 439


P

packet captures, troubleshooting and, 399
packet loss, 275

packets Flow Log element, 44
packets per second, 276
parameters, templates, 315–318
PAT (Port Address Translation), 32
peering, Amazon VPC, 19, 38–40
penetration testing

authorization, 455–456

authorization scope, 454–455

exceptions, 454–455
performance optimization, enhanced

networking

AWS Direct Connect and, 282
DSCP (Differentiated Services Code

Point), 282–283

flow performance, 281

instance bandwidth, 281

jumbo frames, 280

load balancer performance, 281
network I/O credit mechanism, 280
QoS (Quality of Service), 282–283
VPN performance, 282

ping, troubleshooting and, 399
placement groups, 40–41

Amazon EC2, 277

Amazon VPC, 19

POODLE (Padding Oracle On Downgraded
Legacy Encryption), 442

port-hour fees, 420

AWS Direct Connect, 421
AWS PrivateLink, 421
Elastic Load Balancing

Application Load Balancer, 421
bandwidth, 422

Classic Load Balancer, 422
LCUs (Load Balancer Capacity

Units), 421

Network Load Balancer, 421
new connections, 422

rule evaluations, 422

NAT gateway, 421

VPN connections, 420–421
protocol Flow Log element, 44
PTR (Pointer) record, 162



R

real-time media, VoIP (Voice over IP), 283
Regional Edge Caches, 214–215

Regions, 2–3, 10

Elastic Load Balancing, 246–247
route tables, 247–249

subnets, 247–249

VRF (Virtual Routing and Forwarding),
248

remote desktop, hybrid deployment and, 371
resource records, DNS

A, 162

AAAA, 162

CAA (Certificate Authority
Authorization), 162

CNAME (Canonical Name), 162
MX (Mail Exchange), 162

NAPTR (Name Authority Pointer), 162
NS (Name Server), 162

PTR (Pointer), 162

SOA (Start of Authority), 161

SPF (Sender Policy Framework), 162–163
SRV (Service), 163

TXT (Text), 163

risk, AWS Risk and Compliance, 438
route tables, Amazon VPC, 18, 22–23
routers, implicit routers, 22

routes, priorities, 23

routing tables, gateway VPC endpoints, 61
RTMP (Real-Time Messaging Protocol),

209, 283–284

RTP (Real-time Transport Protocol),
283–284


S

SA (Security Association), 96
scaling

horizontal, 108–110

vertical, 106–108

scenarios, hybrid development, 468–471


SCP (Service Control Policy), 235
security

Amazon GuardDuty, 252–253

Amazon Inspector, 253

Amazon Macie, 253–254
data flow

Amazon CloudFront, 240–241

Amazon Route 53, 238–240
AWS Certificate Manager, 242
AWS Lambda@Edge, 241–242

AWS Shield, 245–246

AWS WAF, 242–245

edge locations, 238, 242

Regions, 242, 246–252
defense in depth, 234
endpoints and, 59

gateway VPC endpoints, 63
governance

AWS CloudFormation, 236

AWS Organizations, 235
AWS Service Catalog, 237

stacks, 321–322

templates, 321–322

security groups, 249–250

Amazon VPC, 18, 26–29

semantic errors, 314–315

service consumers, 66

AWS PrivateLink and, 68–69

service fees, 420

AWS Direct Connect, 421
AWS PrivateLink, 421
Elastic Load Balancing

Application Load Balancer, 421
bandwidth, 422

Classic Load Balancer, 422
LCUs (Load Balancer Capacity

Units), 421

Network Load Balancer, 421
new connections, 422

rule evaluations, 422

NAT gateway, 421

VPN connections, 420–421

service locations, 6


service providers, 66

AWS PrivateLink and, 68
Shared Services VPC, 69–70
shuffle sharding, 239

SIEM (Security Information and Event
Management), 452

site-to-site VPNs, 94

Amazon EC2 as termination endpoint,
101–110

availability, 102–104

creation and, 104–106

monitoring, 106

performance, 106–110

redundancy, 102–104
termination endpoint, on-premises

networks, 110–112

third-party VPN devices, 111–112
VGW as termination endpoint, 95

ASN (Autonomous System Number), 95
availability, 96–97

AWS VPN CloudHub, 98–99
CIDR, 97

encryption domain access, 96
ESP (Encapsulating Security

Payload), 95

IKE (Internet Key Exchange), 96
monitoring, 101

NAT (Network Address Translation),
98

NAT-T (Network Address Translation
Traversal), 98

policy-based VPNs, 96

redundancy, 96–97

routing, 97–98

SA (Security Association), 96
security, 97

VPN creation, 100–101

SLA (Service Level Agreement), 239
SLD (Second-Level Domain), 157
SOA (Start of Authority) record, 161
solution testing, 289

SPF (Sender Policy Framework) record,
162–163

split-horizon, 68

srcaddr Flow Log element, 44
srcport Flow Log element, 44
SRV (Service) record, 163

SSL (Secure Sockets Layer), 442
stacks

creating, 307–310

deleting, resource retention, 319
dependencies, 310–313

security, 321–322

start Flow Log element, 44
subnets, Amazon VPC, 18
SYN flood attacks, 450


T

TCP (Transmission Control Protocol), 2
Telnet, troubleshooting and, 399
templates

approvals, 323–325

change sets, 318–319

CIDR (Classless Inter-Domain Routing),
307

continuous delivery, 322–323

pipelines, 323

creating, 307–310
errors

semantic, 314–315

validation, 314

parameters, 315–318

security, 321–322

version control, 322

tenant isolation, 5
testing performance

Amazon CloudWatch metrics, 286–287
AWS Direct Connect, 288

instance networking, 287
methodology

solution testing, 289

throughput testing, 288–289

threat modeling, 436

least privilege, 436
need to know, 437
separation of duty, 436


throughput, 275

throughput testing, 288–289
TLD (Top-Level Domain), 157

ICANN (Internet Corporation for
Assigned Names and Numbers), 157

servers, 160

traceroute, troubleshooting and, 399
traffic, AWS Cloud services

Amazon CloudWatch, 256
Amazon Elasticsearch Service, 256
Amazon Kinesis Firehose,

256–257

AWS Lambda, 257

IAM, 256

Kibana, 256

VPC Flow Logs, 257
transitive routing, 62

Amazon VPC, 73–74

hybrid deployment, VPC scenarios,
384–386

security benefits, 71–73

VGW, 70

troubleshooting

ACLs (access control lists), 405

AWS Cloud services connectivity, 407
AWS CloudFront connectivity, 407
AWS Direct Connect, 404

DNS (Domain Name System),
408–409

Elastic Load Balancing, 408

IKE (Internet Key Exchange) phase 1 and
2, 403

Internet connectivity, 402

methodology, 398

routing, 405–406

security groups, 404–405

service limits, 409
tools

AWS-native tools, 400–401

nslookup, 399

packet captures, 399

ping, 399

Telnet, 399

traceroute, 399


VPC peering connections, 406

VPN (Virtual Private Network), 402
TTL (Time to Live), 210

TXT (Text) record, 163


U

UDP (User Datagram Protocol), 33
reflection attacks, 450


V

validation errors, 314

version control, templates, 322
version Flow Log element, 43
vertical scaling, 106–108

VRF (Virtual Routing and Forwarding),
248

VGWs (Virtual Private Gateways), 22
Amazon VPC, 19, 35–36

gateway endpoints and, 62
transitive routing, 70

VIFs (Virtual Interfaces), 59
AWS Direct Connect, 136
hosted, 140

private, 138–139

configuration, 143–144

public, 137–138

configuration, 143

VLANs (virtual LANs), AWS Direct
Connect, 130

VPC (Virtual Private Cloud), 94
endpoint, exercise, 84

endpoint service, exercise, 82–83
IPv4 CIDR ranges, exercise, 86–87
transitive routing

exercise, 85–86

hybrid deployment, 384–386
VPC endpoints, hybrid deployment,

375–376

VPC Flow Logs, Amazon CloudWatch,
257


VPN connections, fees, 420–421

VPNs (Virtual Private Networks), 7, 94

Amazon VPC, 19, 35–36

AWS Direct Connect, Backup VPN,
144–145

AWS-managed, 119

billing and, 420

client-to-site, 94, 111–113
design patterns

L3 encryption, Amazon EC2, 114–115
L3 encryption, AWS Direct Connect,

115

multicast in Amazon VPC, 115
on-premises network to VPC, 114
transitive routing, 115–117

IPsec protocols, 94

policy-based, 96

site-to-site, 94–95

Amazon EC2 as termination endpoint,
101–110

endpoints, 94

termination endpoint, on-premises
networks, 110–112

VGW as termination endpoint, 95–101
termination options, 119

VGW, 119


W–Z

WHOIS database, 157

Wowza Streaming Engine 4.2, 219

Comprehensive Online
Learning Environment

Register to gain one year of FREE access to the comprehensive online interactive
learning environment and test bank to help you study for your
AWS Certified Advanced Networking - Specialty exam.


The online test bank includes:

Go to http://www.wiley.com/go/sybextestprep to register and gain access to this
comprehensive study tool package.


WILEY END USER LICENSE
AGREEMENT

Go to www.wiley.com/go/eula to access Wiley’s ebook
EULA.

success. And the final point we landed on was incident
response and the importance of having a clean room to
perform your investigations and data forensics.

Security is a 24/7, 365-day-a-year job. It’s never-ending.

When incidents occur, they should always be treated as
opportunities to improve the security of your environment.
Hopefully, by utilizing some of the advice in this section, you
will reduce the number of incidents you must respond to.


Questions

  1. Having a strong identity foundation is not necessary for
    a secure environment.

    1. True

    2. False

  2. Best practice dictates you should never delete the root
    user’s access keys.

    1. True

    2. False

  3. AWS CloudHSM integrates seamlessly with other AWS
    services.

    1. True

    2. False

  4. AWS Config is used as a service to help you configure
    your security standards.

    1. True

    2. False

  5. What layer of the OSI model does a NACL protect?

    1. Presentation

    2. Session

    3. Application

    4. Network

  6. Inside a VPC, it doesn’t matter if you use overlapping IP
    ranges for your subnets.

    1. True

    2. False

  7. Which of these services can you use to automate
    patches and event remediation? (Choose all that apply.)

    1. Amazon EC2

    2. AWS Config

    3. AWS KMS

    4. AWS Systems Manager

  8. Which standard focuses on HTTPS and TLS protocols?

    1. Least privilege

    2. Defense in depth

    3. Data at rest

    4. Data in transit

  9. Why is data classification important?

    1. It is a way to organize data cleanly.

    2. It is a way to categorize data based on level of
      sensitivity.

    3. It is a way to categorize data based on file size.

    4. It is a way to organize data based on least privilege.

  10. Why is it difficult to perform an investigation during an
    ongoing event?

    1. You don’t have the proper tools.

    2. There is too much chaos and movement.

    3. The environment is untrusted.

    4. There are too many people responding to the event.

Answers

  1. B. A strong identity foundation is critical to a secure
    architecture. Without it, you open your environment up
    to compromise or downtime.

  2. B. You should always delete the root user’s access keys
    and use an IAM user for day-to-day activities. This
    reduces the risk of root compromise and accidental
    account deletion.

  3. B. AWS KMS is the service that integrates seamlessly
    with other AWS services. AWS CloudHSM does not
    integrate with AWS services and must be used through
    applications only.

  4. B. AWS Config shows you a history of the configuration
    changes of your AWS resources and marks them
    COMPLIANT or NONCOMPLIANT based on rules and
    configurations you decide.

  5. D. NACLs work at the Network layer of the OSI model.
    They are Network Access Control Lists.

  6. B. You should avoid using overlapping IP ranges to
    ensure communications between networks don’t collide.

  7. B and D. AWS Config and AWS Systems Manager can be
    used to automate remediation and patching as they
    both can scan your resources for configuration changes
    and updates.

  8. D. Data in transit utilizes HTTPS and TLS protocols to
    secure data communications. Data at rest uses other
    forms of encryption.

  9. B. Data classification is a way to organize data based on
    the level of sensitivity. Based on the categorization, you
    determine the level of security controls needed to
    protect the data.

  10. C. It is difficult to perform a clean investigation in an
    untrusted environment. To ensure authenticity of data
    collected, the environment must be trusted to be clean
    and uncompromised.


    Additional Resources

benefits of each service option, what actions they can
perform, how they can assist with investigations, and how
they help with overall recovery of your environment. Each
service has its own benefits, so you will want to determine
based off use cases and scenarios which ones will work for
your unique environment. We covered options ranging from
monitors and logs, methods to parse and search logs, and
dashboards to track security findings from multiple services.
All are beneficial on their own or can be combined to create
a complete plan of action.

From there, we dove into different methods of identifying
security events. These can range from monitor or log
notifications, odd billing activity noticed on your account,
AWS partner contact, an outreach letter from AWS Security,
or even outside interaction to your site contact. All are
important options to track, and investigating every contact
is necessary to ensure compliance of your environment.

Lastly, we covered how you can determine the root cause
of your security events. By reviewing all available logs and
findings in your account and utilizing services like Detective,
you can quickly and easily investigate events. Abuse notices
provide a ton of information to point you in the right
direction to begin investigations on your instances and
resources that could potentially be compromised and cause
harm to other AWS customers. Determining the root cause
is the first step to allow you to mitigate the issue and repair
it so that it does not occur again.


Questions

  1. If AWS Abuse reaches out to you, which of these options
    is an incorrect action?

    1. Delete your access keys

    2. Change your passwords

    3. Stop communications with AWS

    4. Keep an open line of communication with AWS

    5. Remove all MFA devices from your account

  2. You have an application that functions as a web crawler.
    You’ve received an abuse notice from AWS. Which type
    of nonintentional abuse does this fall under?

    1. Compromised resource

    2. Secondary abuse

    3. False complaints

    4. Application function

  3. Once you have removed a threat from your network, you
    are using KMS to implement encryption across your AWS
    resources. Which phase of the incident response
    framework is this?

    1. Containment

    2. Recovery

    3. Investigation

    4. Eradication

  4. Which tool provides a consolidated view of logs like DNS
    logs and AWS CloudTrail logs?

    1. AWS Security Hub

    2. Amazon CloudWatch Logs

    3. Amazon Detective

    4. Amazon GuardDuty

  5. Amazon GuardDuty works seamlessly with which AWS
    services? (Choose two.)

    1. Amazon EC2

    2. AWS CloudTrail

    3. Amazon S3

    4. Amazon VPC flow logs

    5. Amazon CloudWatch

  6. Which of the following are valid threat purpose values
    for Amazon GuardDuty? (Choose three.)

    1. Policy

    2. Ideal

    3. Recon

    4. Cryptocurrency

    5. Virus

    6. Authorized access

  7. AWS Security Hub integrates smoothly with which AWS
    Service. (Choose two.)

    1. AWS Config

    2. Amazon GuardDuty

    3. Amazon Macie

    4. AWS KMS

  8. Why is it important to scan network logs?

    1. To keep an eye on what the employees on your
      network are doing.

    2. To ensure there are no dropped packets or high
      latency.

    3. To be alerted to unusual traffic entering and exiting
      your network as a potential security event.

    4. To know if access has been made to your private S3
      buckets.

  9. AWS Firewall Manager works with which AWS services?
    (Choose two.)

    1. AWS WAF

    2. AWS Shield Advanced

    3. AWS Shield Standard

    4. AWS WAF Classic

    5. AWS IAM


Answers

  1. C. Keeping the line of communication open is key to
    resolving the issue and reinstating account resources.
    You never want to stop communication with AWS as it
    can result in complete lockdown or termination of your
    account.

  2. D. This is an intended function of the application and a
    false positive. You still need to explain to AWS the
    function of your application and ensure it will not impact
    other users.

  3. B. Recovery is the phase in which you protect data after
    the threat has been removed. In recovery, you ensure
    any data affected is protected by encryption, policies,
    and remediations.

  4. D. Amazon GuardDuty is the service that consolidates
    logs from DNS logs, AWS CloudTrail logs, and Amazon
    VPC flow logs and showcases them in a single
    dashboard for viewing.

  5. B and D. GuardDuty scans CloudTrail logs, VPC flow
    logs, and DNS logs to determine findings. It does not
    scan application logs from EC2 or log groups from
    CloudWatch.

  6. A, C, and D. The other options are not valid threat
    purpose values per GuardDuty documentation:
    https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-format.html.
    Threat purposes describe the primary purposes of a
    threat or potential attack.

  7. B and C. Security Hub integrates with GuardDuty,
    Inspector, Macie, IAM Access Analyzer, and Firewall
    Manager only. Information from these services is
    aggregated in the Security Hub dashboard for viewing.

  8. C. Scanning network logs allows you to be alerted to
    unusual traffic entering and exiting your network as a
    potential security event. This is important to try and
    catch unauthorized actors before they get more control
    or access to your resources.

  9. A, B, and D. Firewall Manager does not work with Shield
    Standard or IAM. It works only with Shield Advanced,
    WAF, WAF Classic, and VPC security groups, and it is
    used within an AWS Organization.


    Additional Resources


Chapter Review

In this chapter we discussed how automation of alerts and
remediation of security incidents helps speed up response
times and reduce downtime. We moved from there into
remediation of some common security incident notifications.
We discussed how to handle them, how to respond to them,
and how to remediate the affected resources. The most
common incidents discussed were AWS abuse notices,
compromised Amazon EC2 instances, and compromised
AWS access keys or security credentials.

After discussion of how to respond and remediate, we
moved into some security best practices to prevent these
incidents from occurring in the first place. The most
common revolve around securing AWS access keys, utilizing
MFA devices, and properly configuring Amazon EC2 security
groups. Less widely known practices include utilizing
perfect forward secrecy with AWS ALBs, AWS API Gateway's
throttling and caching abilities, and AWS Systems
Manager to perform operational and security tasks on
AWS resources. Each of these was discussed further with lab
exercises showcasing the abilities of each and how they can
be used to ensure security incidents do not happen in the
future. Remember, security incidents are bound to happen.
It’s how you prepare and respond that is most important.
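As a rough sketch of the API Gateway throttling practice mentioned above, stage-wide rate limits can be set through the `update_stage` patch-operation interface. Everything here is illustrative: the API ID, stage name, and chosen limits are made-up examples, and only the data structure is built so the snippet runs without AWS credentials.

```python
# Hypothetical sketch: building the patchOperations that boto3's
# apigateway.update_stage() accepts to cap request rates on a stage.
# The "/*/*" path prefix applies the setting to every resource and
# method in the stage. Limit values are example numbers.

def throttle_patch_ops(rate_limit=100, burst_limit=200):
    """Build patch operations that set stage-wide throttling limits."""
    return [
        {"op": "replace", "path": "/*/*/throttling/rateLimit",
         "value": str(rate_limit)},
        {"op": "replace", "path": "/*/*/throttling/burstLimit",
         "value": str(burst_limit)},
    ]

ops = throttle_patch_ops(rate_limit=50, burst_limit=100)
# These would then be applied with (IDs below are made up):
# boto3.client("apigateway").update_stage(
#     restApiId="abc123", stageName="prod", patchOperations=ops)
print(ops[0]["path"])
```

Capping the steady-state rate and burst this way is what lets the stage shed excess requests during a flood instead of passing them all to the backend.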


Questions

  1. Which service does AWS WAF not integrate with?

    1. Application Load Balancer

    2. Network Load Balancer

    3. EC2

    4. CloudFront

  2. Which of the following AWS services can be used to
    mitigate a DDoS attack? (Choose all that apply.)
    1. CloudFront

    2. EC2

    3. Route 53

    4. VPC flow logs

    5. Elastic Load Balancing

  3. Which of the following ciphers provide perfect forward
    secrecy?

    1. DHE

    2. AES

    3. RC4

    4. PSK

    5. ECDHE

  4. You want to configure an SSL connection to your
    website. Which of these AWS services permits you to do
    so?

    1. EC2

    2. ACM

    3. EFS

    4. S3

  5. Which kind of attack is a botnet used for?

    1. SQL injection

    2. Man-in-the-middle

    3. DDoS

    4. Phishing

  6. You have accidentally uploaded your AWS access keys to
    GitHub. What should you do? (Choose all that apply.)

    1. Delete the access key that has been exposed

    2. Make your access key inactive

    3. Create a new SSH key pair

    4. Keep the access key but create a new secret access
      key

    5. Create new access key and secret access key

    6. Delete your SSH key pair

  7. If your Amazon EC2 instance is compromised, which of
    the following actions should you take? (Choose all that
    apply.)

    1. Immediately terminate the instance

    2. Isolate the instance

    3. Detach all volumes from the instance

    4. Create Snapshots of the instance

    5. Share it with another account

  8. Why is AWS API Gateway throttling helpful?

    1. Allows you to reject unauthorized requests

    2. Allows you to give permissions to access your APIs

    3. Helps prevent downtime in the event of a DDoS
      attack

    4. Reduces the possibility of a man-in-the-middle attack


Answers

  1. B. As of this writing, AWS WAF does not integrate with
    AWS NLBs. It can only be used with Amazon CloudFront,
    Amazon API Gateway, Application Load Balancers, or
    AWS AppSync GraphQL.

  2. A, C, and E. Only Amazon CloudFront, Amazon Route
    53, and AWS ELBs can be used to mitigate a DDoS
    attack. Amazon CloudFront caches items and prevents
    traffic from directly hitting your servers if not necessary.
    Amazon Route 53 can use weighted routing and specific

    rules to direct traffic equally. And AWS ELBs can be used
    to spread load across multiple instances to reduce the
    load on a specific server, causing downtime.

  3. E. ECDHE is required for perfect forward secrecy. This
    algorithm is used to derive the session key that provides
    additional safeguards against eavesdropping on your
    encrypted data.

  4. B. ACM is the only option for generation of SSL/TLS
    certificates. (IAM only allows import of SSL certificates,
    not the creation of them.)

  5. C. Botnets are used to provide massive resources to
    perform DDoS attacks. The more resources, the harder
    hitting the DDoS attack can be.

  6. A, B, and E. You must disable, delete, and re-create
    AWS access keys. SSH keys are not part of this. By
    disabling and then deleting the access keys, you
    immediately remove all access they have to your
    resources.

  7. B and D. When dealing with a compromised instance,
    you need to isolate it from your network to prevent
    further compromise and create snapshots for
    investigative purposes.

  8. C. The throttling will help mitigate a large number of
    requests at once, preventing downtime in the event of a
    DDoS attack.


    Additional Resources

business-level category and has different metrics that can
be monitored using CloudWatch.

CloudWatch natively integrates with over 70 AWS services
to provide a comprehensive monitoring platform and can be
considered a huge metrics repository. From ingesting
metrics data to ingesting application and system logs from
both the cloud and on-premises servers, CloudWatch
provides you with detection capabilities that can further be
acted upon using alarms and CloudWatch Events for
remediation and processing.
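The alarm-plus-notification pattern described above can be sketched as the parameters for CloudWatch's `put_metric_alarm` call. The alarm name, instance ID, thresholds, and SNS topic ARN below are all made-up examples; the snippet only builds the argument dictionary so it runs without an AWS account.

```python
# Hypothetical sketch: keyword arguments for boto3's
# cloudwatch.put_metric_alarm() that notify an SNS topic when average
# EC2 CPU utilization stays above a threshold. Names/ARNs are made up.

def cpu_alarm_params(instance_id, topic_arn, threshold=80.0):
    """Build the kwargs for a high-CPU alarm on one EC2 instance."""
    return {
        "AlarmName": f"HighCPU-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                # evaluate in 5-minute windows
        "EvaluationPeriods": 2,       # two consecutive breaches fire the alarm
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS topic that sends the notification
    }

params = cpu_alarm_params("i-0123456789abcdef0",
                          "arn:aws:sns:us-east-1:111122223333:ops-alerts")
# Applied with (client creation omitted):
# boto3.client("cloudwatch").put_metric_alarm(**params)
print(params["AlarmName"])
```

Requiring two consecutive evaluation periods is one way to avoid paging on a momentary spike while still catching sustained unusual CPU usage of the kind the review questions below describe.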

A number of application monitoring features are provided
to help gain insights into application health and
performance. CloudWatch ServiceLens can help you get a
360-degree view of your application health and
performance, while Container Insights helps you monitor
your containers when running microservices-based
applications. Synthetic monitoring is enabled using
CloudWatch Synthetics, which helps you monitor APIs and
endpoints to meet your business SLAs.


Questions

  1. As a security engineer you must ensure that any EC2
    instance launched in your AWS account is done so using
    a company-approved, security-hardened AMI. Any EC2
    instance not complying with this should be
    automatically terminated and an e-mail sent to the
    operations team. What should you do to automate this?

    1. Use the AWS EC2 console and check the EC2
      instances running. If an EC2 instance is not using
      the approved AMI, terminate it.

    2. Create a CloudWatch Events rule that monitors EC2
      instance state changes, and configure a Lambda
      function that will terminate the noncompliant EC2
      instance. To enable notifications, configure an SNS
      topic with this rule.

    3. Use the AWS EC2 CLI to terminate the noncompliant
      EC2 instance.

    4. Set the Auto-Terminate option on the EC2 instance
      when a noncompliant AMI is used.

  2. You have been asked to investigate slowness in your
    application running on an EC2 instance. You use SSH to
    access the EC2 instance and find there’s a malware
    process causing a tremendous increase in CPU
    utilization. What could you have done to be proactively
    notified about this unusual CPU spike?

    1. You’ve done everything possible. There’s nothing you
      can do. Your EC2 instance has been compromised.
      Just terminate it.

    2. Use the HighCPUUtilization alarm provided by
      CloudWatch and configure your e-mail ID in the
      alarm.

    3. Enable antimalware software provided by AWS for
      your EC2 instance, and configure your e-mail ID in
      this software.

    4. Create a CloudWatch alarm to monitor EC2 CPU
      Utilization metric and configure an SNS topic to
      notify you when the CPU utilization is unusually high.

  3. You are running a Java-based website on an EC2
    instance and are seeing that the website has slowed
    down drastically. You monitor the various metrics of the
    EC2 instance and don’t see anything alarming. You use
    SSH to access the EC2 instance and check the website
    logs and see that a memory leak has occurred, which
    has consumed most of the instance memory. What
    actions should you take to make sure you are

    proactively notified when the EC2 instance is running
    out of memory? (Choose two.)

    1. Create a CloudWatch alarm for the MemoryUtilized
      metric provided by CloudWatch.

    2. Configure the memory monitoring scripts provided
      by AWS in your EC2 instance to publish memory
      utilization data points to CloudWatch as a custom
      metric named MemoryUtilization.

    3. Configure your applications to push memory
      utilization to CloudWatch and make a call to the SNS
      topic to notify you.

    4. Configure an alarm for the MemoryUtilization custom
      metric and assign an SNS topic for notification.

  4. You are charged with coming up with a monitoring
    strategy for a business-critical application deployed on
    an EC2 instance. This application is multithreaded, and
    so monitoring the number of threads in the thread pool
    is critical. Your developers inform you that the number
    of available threads needs to be monitored in small
    intervals and breaches beyond a threshold in subminute
    intervals are to be captured and alerted on. What should
    you do?

    1. Modify your application to capture the number of
      threads every second, store it locally on the EC2
      instance, and send this data to CloudWatch every
      minute.

    2. Use the enhanced monitoring metrics of CloudWatch
      and choose the NumberOfThreads metric to monitor.

    3. Publish the number of threads available to
      CloudWatch every second as a high-resolution
      metric data point using the PutMetricData API.

    4. Use the AWS-provided script for publishing
      information on threads in the thread pool. Send this

      to CloudWatch every 10 seconds.

  5. You have implemented various monitoring solutions for
    workloads running on AWS. Upon reviewing the detailed
    billing reports from AWS, you find that your costs for
    CloudWatch are greater than expected. You zero in on
    the fact that there are lots of calls being made to
    PutMetricData to support the monitoring of various
    custom metrics. What could you do to potentially reduce
    this cost without negatively affecting your monitoring
    strategy?

    1. Build a custom metric data-gathering system
      yourself and don’t rely on CloudWatch.

    2. Reduce the number of custom metrics created.

    3. Use high-resolution metrics calls by making calls to
      PutMetricData API.

    4. Use StatisticSet to reduce the number of calls made
      to PutMetricData API.

  6. Your development team has informed you that a few IAM
    policies are being modified constantly by someone on
    the team, which is resulting in disruption and causing
    service availability issues. You’d like to build a
    notification mechanism whenever an IAM policy is
    modified. What steps would help you to optimally
    achieve this? (Choose two.)

    1. Build a Cron job in an EC2 instance to check if the
      IAM policy is being called.

    2. Create a CloudWatch rule for the IAM API
      CreatePolicyVersion and configure the target to be a
      Lambda function to get details about the invocation.

    3. Configure your e-mail ID in the Cron job for
      notification.

    4. Configure an SNS topic with the CloudWatch rule.

  7. Your organization runs all their workloads on AWS. On
    any given day, there are more than 500 EC2 instances
    running. Developers are given sandbox AWS accounts
    for experimentation. As a result, you are seeing
    increased numbers of EC2 instances and EBS volumes
    being created. Upon further investigation, you find that
    many EBS volumes are not even attached to any EC2
    instances, and this is increasing your AWS costs. You
    decide to build automation to delete such volumes at
    the end of the day to save cost. What is the most
    optimal way to implement this?

    1. Write a script to get a list of all EBS volumes and run
      it on an EC2 instance by configuring it as a Cron job
      to run every day.

    2. Create a Cron-based CloudWatch scheduled event rule
      to run cleanup code in a Lambda function, which is
      configured as a target for the rule.

    3. Create a CloudWatch event rule to be triggered by
      the API GetNonAttachedEBVolumes. Attach a
      Lambda function to perform the cleanup.

    4. Set the Terminate Volume option for all EBS volumes
      created. This will automatically delete the detached
      volumes.

  8. You are part of a small start-up building products on
    AWS. Your team is made up of developers who love
    experimenting with various AWS services. You are
    conscious of spending on AWS and would like to be
    notified about the possible estimated spending in your
    account when it breaches a threshold. What can you use
    to achieve this?

    1. Create a CloudWatch billing alarm to monitor
      spending and send you notifications.

    2. AWS will automatically notify you if your billing
      exceeds a threshold. You don’t have to do anything.

    3. Call AWS Support and ask them to notify you when
      your billing exceeds the threshold.

    4. AWS provides a scheduled CloudWatch Events rule
      for this. Configure it to run every day.

  9. You are tasked with monitoring the health of several
    microservices your application depends on. Each
    microservice implements a health check service to
    indicate the health of the service. What service would
    you choose to implement this monitoring?

    1. Third-party monitoring system

    2. CloudWatch Synthetics

    3. CloudWatch Events

    4. CloudWatch ServiceLens

  10. You have created a CloudWatch CPUUtilization alarm for
    an EC2 instance and find that the alarm does not
    change to the ALARM status. What could be the possible
    reasons? (Choose two.)

    1. CloudWatch does not have enough data points to
      determine the state of the alarm.

    2. Upon evaluation, the CPU utilization has not
      breached the threshold.

    3. The alarm was set to require two out of five data
      points over five-minute periods, but only one data
      point breached the threshold.

    4. For the alarm to be in the ALARM status, detailed
      monitoring has to be enabled for this EC2 instance.


Answers

  1. B. Because you are required to automate the
    termination of an EC2 instance, you can create a
    CloudWatch Events rule that monitors EC2 instance

    state changes such as Pending, Running, etc. Configure
    a Lambda function with this rule, which will terminate
    the noncompliant EC2 instance. Remember that
    CloudWatch Events delivers a stream of events which
    describe changes to AWS Resources. So in this case,
    when an EC2 instance is launched, an event gets
    delivered to CloudWatch Events. And with a Lambda
    function configured as an Events rule target, you can
    inspect the event payload and the metadata of the EC2
    instance and terminate it programmatically.
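As a rough sketch of this pattern, the event pattern below matches EC2 state-change events entering "running," and a hypothetical helper checks the instance's AMI against an approved list. The AMI IDs are placeholders; a real Lambda target would look up the instance's ImageId via the EC2 API before deciding.

```python
# Sketch of the CloudWatch Events rule pattern and the compliance
# check a Lambda target could apply. AMI IDs are placeholders.
EVENT_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},  # fire when an instance enters "running"
}

APPROVED_AMIS = {"ami-0aaaa1111", "ami-0bbbb2222"}  # placeholder approved AMIs

def is_compliant(image_id: str) -> bool:
    """Return True if the instance was launched from an approved AMI."""
    return image_id in APPROVED_AMIS
```

A noncompliant result would drive a TerminateInstances call plus an SNS publish to the operations topic.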

  2. D. Monitoring CPU utilization using CloudWatch lets you
    proactively monitor for high or low CPU utilization and
    take action. You should also know that there is no EC2
    metric named HighCPUUtilization. Instead, you use
    CloudWatch to create an alarm on the CPUUtilization
    metric, configure the threshold for the alarm, and
    configure an SNS topic for notification which can send
    you an e-mail, send SMS, and take other actions.
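A hedged sketch of such an alarm, expressed as the keyword arguments one might pass to boto3's `cloudwatch.put_metric_alarm` (the instance ID, threshold, and topic ARN are illustrative placeholders):

```python
# Keyword arguments for CloudWatch's PutMetricAlarm API
# (boto3: cloudwatch.put_metric_alarm(**alarm)). Values are illustrative.
alarm = {
    "AlarmName": "high-cpu-i-0123456789abcdef0",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                      # evaluate 5-minute averages
    "EvaluationPeriods": 2,
    "Threshold": 80.0,                  # percent CPU considered unusually high
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
}
```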

  3. B and D. You should configure the memory monitoring
    scripts provided by AWS for your EC2 instance so you
    can publish them as a custom metric. You can then
    configure an alarm for the MemoryUtilization custom
    metric and assign an SNS topic for notification. Answer A
    is invalid because there is no EC2 metric named
    MemoryUtilized. Answer C is invalid because although
    your applications can publish memory utilization to
    CloudWatch, they should not make a call to SNS. This
    needs to be configured in the CloudWatch alarm.

  4. C. Since we are interested in subminute intervals, we
    can publish the number of threads available to
    CloudWatch every second as a high-resolution metric
    data point using the PutMetricData API. Although answer
    A is technically feasible, it is not an optimal solution.
    Answer B is invalid because there is no metric named

    NumberOfThreads. Answer D is invalid because there is
    no AWS-provided script for publishing information on
    threads.
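A minimal sketch of the high-resolution publish this answer describes, as the payload one might pass to boto3's `cloudwatch.put_metric_data`; the namespace and metric name are assumptions:

```python
import datetime

# Payload for PutMetricData publishing a 1-second-resolution custom
# metric (boto3: cloudwatch.put_metric_data(**payload)).
payload = {
    "Namespace": "MyApp/ThreadPool",        # placeholder namespace
    "MetricData": [{
        "MetricName": "AvailableThreads",   # placeholder metric name
        "Timestamp": datetime.datetime.now(datetime.timezone.utc),
        "Value": 42.0,
        "Unit": "Count",
        "StorageResolution": 1,  # 1 = high-resolution (sub-minute) metric
    }],
}
```

Setting `StorageResolution` to 1 is what makes the data point high-resolution; the default of 60 gives standard one-minute resolution.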

  5. D. Each call that publishes custom metric data to
    CloudWatch costs you. StatisticSet lets you collect
    metric data and aggregate it locally across many
    samples. To reduce the number of PutMetricData API
    calls, we can use StatisticSet to send this data once
    every ten seconds, for example, instead of every
    second.
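The local aggregation can be sketched as a small helper that collapses raw samples into the StatisticValues shape that PutMetricData accepts:

```python
def to_statistic_set(samples):
    """Aggregate raw samples locally into a CloudWatch StatisticSet,
    so one PutMetricData call replaces len(samples) calls."""
    return {
        "SampleCount": len(samples),
        "Sum": sum(samples),
        "Minimum": min(samples),
        "Maximum": max(samples),
    }

# Three per-second readings collapse into one StatisticValues entry,
# which would be sent as MetricData[0]["StatisticValues"]:
stats = to_statistic_set([3.0, 7.0, 5.0])
```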

  6. B and D. Because we know the exact API call used to
    modify the IAM policy, we can create a CloudWatch rule
    for the IAM API call CreatePolicyVersion and configure
    the target to be a Lambda function to get details about
    the invocation. For notifications, we add an SNS topic to
    this CloudWatch Events rule. Answers A and C are not
    valid because they depend on Cron jobs, which are a
    highly nonoptimal solution.

  7. B. The key to this question is that the automation you
    build should delete EBS volumes at the end of the day.
    Instead of creating a Cron job on an EC2 instance, we
    can create a Cron-based CloudWatch scheduled event
    rule that runs cleanup code in a Lambda function, which is
    configured as a target for the rule. This is much more
    optimal from a cost and operations perspective, since
    we avoid running an EC2 instance just to run a Cron job.
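A minimal sketch of the two pieces involved: the rule's schedule expression and the selection logic a Lambda target could apply. The sample data mimics the shape of EC2's DescribeVolumes response; the 23:00 UTC schedule is an assumption for "end of the day."

```python
# Schedule expression for the CloudWatch Events rule: run daily
# at 23:00 UTC (assumed "end of day").
SCHEDULE_EXPRESSION = "cron(0 23 * * ? *)"

def unattached_volume_ids(volumes):
    """Return IDs of EBS volumes not attached to any instance.
    Detached volumes report the "available" state."""
    return [v["VolumeId"] for v in volumes if v["State"] == "available"]
```

The Lambda handler would pass the selected IDs to DeleteVolume calls.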

  8. A. You can create a CloudWatch billing alarm to monitor
    spending and send you notifications. None of the other
    answer options are valid; billing alarms are meant for
    the sole purpose of monitoring AWS charges.

  9. B. CloudWatch Synthetics provides you the ability to
    monitor the health of your endpoints. This enables you
    to discover issues before your customers do. Although

    answer A can be valid, it is not relevant because
    CloudWatch Synthetics provides this capability.

    CloudWatch Events and ServiceLens do not provide this
    capability.

  10. A and C. For the alarm to move into the ALARM state,
    CloudWatch must have enough data points to make an
    evaluation, and the configured number of data points
    must breach the threshold within the evaluation
    period.
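The two failure modes can be sketched with a simplified model of M-out-of-N alarm evaluation (real CloudWatch also handles missing-data treatment and period alignment, which this ignores):

```python
def alarm_state(datapoints, threshold, datapoints_to_alarm, evaluation_periods):
    """Simplified model of CloudWatch alarm evaluation: ALARM only
    when at least M of the last N data points breach the threshold."""
    window = datapoints[-evaluation_periods:]
    if len(window) < evaluation_periods:
        return "INSUFFICIENT_DATA"   # not enough data points to evaluate
    breaches = sum(1 for d in window if d > threshold)
    return "ALARM" if breaches >= datapoints_to_alarm else "OK"
```

With a 2-out-of-5 configuration, a single breaching data point leaves the alarm in OK, matching answer C.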


    Additional Resources

With many AWS Security services focused on solving
security problems, it is also imperative that the security
findings be aggregated across accounts and provide you
with a report on the current security posture in a centralized
console. We’ve seen how AWS Security Hub solves this by
providing you with a single view.

Finally, you have seen how AWS Trusted Advisor helps
protect some of your AWS resources by following AWS best
practices. AWS Trusted Advisor is an application that draws
upon best practices learned from AWS’s aggregated
operational history of serving hundreds of thousands of AWS
customers. Trusted Advisor inspects your AWS environment
and makes recommendations for saving money, improving
system performance, or closing security gaps.


Questions

  1. Your company is running many applications on the AWS
    cloud. You have been tasked with getting a list of all
    resources being used. What is the easiest way to create
    a list of resource inventory?

    1. Make use of CloudTrail. It maintains a list of all
      resources and API calls.

    2. Use AWS Config. It maintains a list of resource
      inventory.

    3. Use the AWS EC2 CLI to create a list of resources.

    4. Get in touch with AWS Support and ask them to send
      you a list.

  2. You are a security engineer responsible for ensuring any
    EC2 instance launched uses a company-approved,
    security-hardened AMI with an encrypted EBS volume.
    EC2 instances not meeting these criteria are to be
    terminated. What is the optimal way of implementing
    this?

    1. Use CloudTrail to look for APIs regarding an EC2
      instance launch and stream the CloudTrail logs to
      Kinesis. Process Kinesis streams using a Lambda
      function to terminate the instance if it is found to be
      noncompliant.

    2. When EC2 is running, run an SSM automation script
      to terminate it if it finds that the instance is
      noncompliant.

    3. Monitor compliance with AWS Config rules triggered
      by configuration changes and configure a Lambda
      function to terminate the instance if it is found to be
      noncompliant.

    4. Use CloudWatch alarms to monitor for the metric
      AMIUsed and configure an SNS topic with the alarm
      to invoke a Lambda function that will terminate the
      instance if it is found to be noncompliant.

  3. Your company is running many web applications on the
    AWS cloud. You are getting reports from your customer
    support team that in the last few days there has been
    some downtime with two such web applications. Upon
    discussing with the development team, you discover
    that somebody or a process accidentally made changes
    to security groups attached to the web servers. You’d
    like to know who made these changes and when were
    they made. What would you do to investigate this?
    (Choose two.)

    1. Identify the security group whose configuration was
      changed. Review the change history of the security
      group to see when the change occurred and who did
      it.

    2. Use AWS Config and view the change history
      (configuration timeline). This will reveal the history
      of changes made to the security group.

    3. Use CloudTrail events to track who made the change
      to the security group.

    4. Set up an alarm for the Trusted Advisor check for
      security groups. When the alarm is triggered, you
      will receive an e-mail with when and who made the
      change to the security group.

  4. You are a part of an IT security team that is looking to
    perform security checks on S3 buckets to see if any of
    the bucket policies are not enforcing MFA. Your team has
    created a custom AWS Config rule for this purpose.
    What are the possible optimal ways this rule can be
    triggered? (Choose two.)

    1. Manually trigger the custom Config rule using the
      console or CLI.

    2. Configure the Config rule to be triggered whenever
      there is a change to an S3 bucket.

    3. Create a Cron job to trigger the Config rule every few
      minutes.

    4. Configure the custom Config rule to be triggered
      periodically, such as every 15 minutes.

  5. Your company has launched various EC2 instances for
    your company’s workloads. Some of these EC2
    instances are large and their per-hour cost is high. You
    have been tasked with monitoring these EC2 instances
    to detect Bitcoin mining, as this has occurred in the
    past. What’s the most optimal way of detecting
    cryptocurrency threats?

    1. Use AWS Inspector to monitor activities within an
      EC2 instance.

    2. Set up a CloudWatch alarm using the CloudWatch
      metric CryptoInAction.

    3. Enable Amazon GuardDuty in your AWS account.

    4. Enable AWS Config in your AWS account. Config
      automatically checks for cryptocurrency activity in
      your accounts.

  6. Your company runs all their workloads on AWS. As a
    result, they use multiple AWS accounts and multiple
    AWS regions. Your company uses many AWS Security
    services as well as third-party security products. You are
    tasked with creating a single view to gain visibility into
    the overall security posture of your workloads across
    accounts and regions. What is the easiest way to
    achieve this?

    1. Create an ELK stack and store all the security logs in
      AWS ElasticSearch. Use Kibana for visualization and
      alerting.

    2. Use CloudWatch dashboards to create a single view.

    3. Enable AWS Security Hub to create a single view.

    4. Use AWS Config to create a view.

  7. You have created a new AWS account and started to
    store some data in S3. You want to check if MFA is
    enabled for the root account and if S3 buckets grant
    global access. You’d like to keep the costs of using AWS
    low. Which tool can you use to get this information?

    1. Use the CloudWatch account and S3 metrics to
      obtain this information.

    2. Check the security category of Trusted Advisor.

    3. Configure CloudTrail, and the logs will give you this
      information.

    4. Enable GuardDuty in your AWS account to get this
      information.

  8. Your company is in the business of issuing student loans.
    Your CISO has asked that security controls be put in
    place to detect if loan information is accidentally

    uploaded to an S3 bucket. While the buckets are not
    publicly accessible and neither is the data, you’d like to
    implement a monitoring mechanism to identify if loan
    information exists in an S3 bucket. What approach
    would you take?

    1. Enable Macie and run a sensitive discovery job by
      configuring a custom data identifier for a loan
      number and student ID. This should flag the bucket
      if it contains loan information.

    2. Enable GuardDuty in your account. GuardDuty
      checks for sensitive data within your S3 buckets and
      notifies you.

    3. Enable Macie and run a sensitive discovery job.
      Macie uses the managed data identifier for a loan
      number and student ID to flag this bucket if it
      contains loan information.

    4. Use AWS Config rules to detect if the bucket contains
      loan information.


Answers

  1. B. The easiest way to get an AWS resource inventory is
    by enabling AWS Config. Using CloudTrail logs and the
    AWS CLI is tedious and highly error prone, and the
    increased cost is another reason to avoid these options.

  2. C. You have been asked about the optimal way. The
    easiest option would be to use a Config rule. An AWS
    Config rule can be created that will be invoked when an
    EC2 instance’s status changes. The rule executes a
    Lambda function, which will terminate the instance if it
    is found to be noncompliant.
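A rough sketch of the evaluation logic inside such a custom Config rule's Lambda function; the approved AMI list is a placeholder, and a real handler would also report the result back via PutEvaluations:

```python
import json

APPROVED_AMIS = {"ami-0aaaa1111"}  # placeholder approved AMI list

def evaluate_compliance(event):
    """Sketch of a custom AWS Config rule handler: inspect the
    configuration item delivered with the invocation and report
    whether the instance uses an approved AMI."""
    item = json.loads(event["invokingEvent"])["configurationItem"]
    if item["resourceType"] != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    ok = item["configuration"]["imageId"] in APPROVED_AMIS
    return "COMPLIANT" if ok else "NON_COMPLIANT"
```

A NON_COMPLIANT result would then drive the TerminateInstances call the answer describes.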

  3. B and C. In order to track all the changes made to the
    security group, including when the changes were made,
    use the AWS Config change history. To know who made

    the change, correlate it with the logs created by
    CloudTrail.

  4. B and D. You can check S3 bucket policies whenever
    there is a change to the S3 bucket or create a Config
    rule that checks S3 bucket policies periodically.

  5. C. By using GuardDuty, you can detect software that
    deals with cryptocurrencies.

  6. C. AWS Security Hub integrates with many AWS Security
    services and various Security partner products to
    provide a single view.

  7. B. Trusted Advisor provides various S3-related security
    checks for every AWS account at no extra cost.

  8. C. Amazon Macie provides you with the ability to create
    custom data identifiers, which can help you create regex
    expressions or keywords to find loan IDs, student IDs,
    etc.


    Additional Resources

Community-Based Source of Custom Rules for
AWS Config
https://github.com/awslabs/aws-config-
rules

agents. We reviewed the configuration details of a
CloudWatch agent and configured an EC2 instance to
publish instance memory metrics and failed SSH login
attempts to CloudWatch Logs to enable monitoring and
thereby enable notifications or perform further processing.

Finally, we introduced you to many of the AWS service
logging capabilities such as VPC logs, CloudFront logs, S3
access logs, and Elastic Load Balancer logs. We reviewed
the structure of these log files and also introduced you to
services such as CloudWatch Logs Insights and Athena for
searching and analyzing these log files.


Questions

  1. Your company has a hybrid cloud model and runs many
    applications in the AWS Cloud as well as on-premises.
    You’ve been asked to monitor your application logs for
    any security threat–related events. What steps would
    you take to implement this without changing your
    application code? (Choose two.)

    1. Use AWS CloudTrail to monitor your applications for
      security threats.

    2. Install a CloudWatch agent and configure it to send
      the application logs to CloudWatch Logs.

    3. Create a metric filter in CloudWatch Logs and a
      CloudWatch alarm to notify you when specific events
      occur.

    4. Run a daemon process in your servers that sends
      metric data to CloudWatch for monitoring.

  2. Your company’s CISO has been made aware that
    CloudTrail logs have been enabled in all of your AWS
    accounts. Your CISO has advised you to encrypt all the
    CloudTrail logs. What is the easiest way of achieving
    this?

    1. Set up an S3 Lambda trigger so that when a new
      CloudTrail log is delivered to the bucket, the Lambda
      function can encrypt the log file.

    2. There is nothing to do. CloudTrail logs are
      automatically encrypted by default.

    3. Assign a KMS CMK key when setting up the trail.

    4. Send CloudTrail logs to Amazon CloudWatch Logs to
      enable encryption.

  3. You have observed a sudden spike in daily spending
    within your AWS account. You suspect that certain API
    calls have provisioned many AWS resources, which has
    caused this hike in spending. You’d like to go back in
    time and inspect what API calls were made in your
    account over the past 15 days. What would you do?

    1. Use CloudTrail event history to get a list of all API
      calls made in the last 15 days.

    2. Send CloudTrail logs to CloudWatch Logs. Create a
      metric filter and alarm with a period of 15 days.

    3. Process the CloudTrail logs as they are delivered to
      an S3 bucket and index the API calls made into a
      DynamoDB table.

    4. Use Athena to query CloudTrail logs in the S3 bucket.

  4. You are receiving e-mails from CloudWatch that an EC2
    instance in a VPC is getting more inbound traffic than
    expected via the CloudWatch alarms created for the
    NetworkIn metric. You want to understand where this
    traffic is coming from. What’s the easiest way to analyze
    the traffic to get to the source?

    1. Install a threat detection agent on your EC2 instance,
      which will inspect all traffic and log it.

    2. Enable VPC flow logs for your VPC and analyze these
      logs to find the source of traffic.

    3. Run a daemon process in your EC2 instance to parse
      the incoming traffic and log the source IP.

    4. CloudTrail logs will have this information. Process
      these logs to find out the source IP.

  5. You’d like to run ad hoc queries against your CloudFront
    distribution logs to look for requests coming from bots.
    Which solution enables you to perform such analysis
    more efficiently?

    1. Redshift

    2. Athena

    3. CloudWatch Insights

    4. RDS

  6. Your company runs many compliance workloads such as
    PCI DSS, FedRAMP, etc., and uses many AWS accounts.
    You have enabled CloudFront, S3, and VPC flow logs
    along with CloudTrail to capture various events that
    occur in all your accounts. You plan to provide read-only
    access to these logs for the third-party auditor so they
    can audit for compliance. How can you help the auditor
    gain access to these logs?

    1. Create an S3 event notification with SNS and have
      SNS e-mail these logs to the auditor.

    2. Grant public read access on the S3 buckets with the
      log files.

    3. Use cross-account IAM roles in your centralized
      account, providing read-only access to specific S3
      folders containing log files. Share the ARN of the role
      with the auditor.

    4. Create an IAM user with read-only permissions to
      these resources and share it with the auditor.

  7. You have configured the CloudWatch agent in your EC2
    instances to deliver Apache access and error logs to

    CloudWatch Logs. After starting the agent, you find that
    no logs are being sent to CloudWatch Logs. What could
    be the possible reasons for this? (Choose all that apply.)

    1. The path of the Apache access and error log file in
      the agent configuration file is wrong.

    2. The IAM role attached to the EC2 instance does not
      grant permissions to CloudWatch Logs.

    3. Apache logs have a special format that is not
      supported by the CloudWatch agent.

    4. The EC2 instance is running in a private subnet, and
      a route to NAT gateway does not exist.

  8. A number of teams in your company are sharing
    documents with their customers by hosting these on S3
    and making them publicly readable. You’d like to
    understand which IPs are accessing these documents so
    you can compare those IPs with a threat list. How would
    you go about getting the source IP list?

    1. CloudTrail logs contain the source IP in them. Inspect
      these logs to get a list of source IPs.

    2. Enable S3 access logs and use Athena to query these
      logs for source IPs.

    3. Use VPC flow logs to find the source IPs.

    4. Use S3 metrics in CloudWatch to get the source IP
      when a request to the S3 object is made.

  9. Some users in your organization are complaining that
    they are unable to use SSH to access one or more EC2
    instances. Thankfully, you have VPC flow logs enabled.
    You inspect the logs and find that SSH traffic to five EC2
    instances has a REJECT status. What configurations may
    have resulted in this rejection? (Choose two.)

    1. The security group's inbound rule that allowed SSH
      traffic has been removed.

    2. NACL has denied outbound traffic.

    3. NACL has denied inbound traffic.

    4. Security group outbound traffic has been denied.

  10. Your company has enabled CloudTrail in all AWS
    accounts. Your security team is worried about the
    integrity and confidentiality of the logs. What would you
    do to handle these requirements?

    1. Create a new trail and register it with CloudWatch.
      Use CloudWatch alarms for integrity and
      confidentiality checks.

    2. Create a new CloudTrail trail to store logs. Use ACLs
      and MFA delete on the S3 bucket.

    3. Create a new trail. Enable MFA delete on the S3
      bucket. Enable log file integrity validation for
      CloudTrail.

    4. Create a new trail and configure SNS with the bucket
      to notify you whenever a CloudTrail is modified or
      deleted.


Answers

  1. B and C. Configuring the CloudWatch agent lets you
    send logs without having to change any application
    code. For monitoring, you can create metric filters and
    CloudWatch alarms.
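As a hedged sketch of the metric-filter half, these are the arguments one might pass to boto3's `logs.put_metric_filter`; the log group name and the "Failed password" pattern are placeholder assumptions for whatever threat event you track:

```python
# Arguments for CloudWatch Logs' PutMetricFilter API
# (boto3: logs.put_metric_filter(**params)). Names are placeholders.
params = {
    "logGroupName": "/myapp/auth",
    "filterName": "failed-ssh-logins",
    "filterPattern": '"Failed password"',   # match lines containing the phrase
    "metricTransformations": [{
        "metricName": "FailedSSHLogins",
        "metricNamespace": "MyApp/Security",
        "metricValue": "1",                 # emit 1 per matching log event
    }],
}
```

A CloudWatch alarm on the resulting FailedSSHLogins metric then provides the notification.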

  2. B. By default, CloudTrail logs are encrypted using S3’s
    server-side encryption.

  3. A. CloudTrail event history helps you troubleshoot
    operational and security incidents over the past 90 days
    in the CloudTrail console.

  4. B. VPC flow logs have a SourceIP (srcaddr) field.
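The default flow log format is space-separated with a fixed field order, so pulling out the source address is a simple split; the sample record below is the hypothetical case from this question (an inbound connection to port 22):

```python
# Default-format VPC flow log fields, in order.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport "
          "dstport protocol packets bytes start end action log_status").split()

def parse_flow_record(line):
    """Split a space-separated flow log record into named fields."""
    return dict(zip(FIELDS, line.split()))

record = parse_flow_record(
    "2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 REJECT OK"
)
```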

  5. B. Athena can be used for running ad hoc operational
    queries on logs within S3.

  6. C. Establishing IAM roles is considered the best practice
    for enabling cross-account access.

  7. A, B, and D. Logs from EC2 instances may not be sent
    for various reasons, such as errors in the agent
    configuration file, the agent not being granted
    permission to put data into CloudWatch Logs, or the
    agent being unable to reach the public CloudWatch
    endpoint due to network security/routing issues.

  8. B. S3 access logs have a Remote IP field in them
    indicating the source IP.

  9. A and C. Changes to inbound rules in security groups or
    inbound rules in NACLs can affect the inbound traffic.

  10. C. CloudTrail logs are encrypted by default. You can
    further secure them by enabling MFA delete on the S3
    bucket containing the log files and by enabling log file
    integrity validation.


    Additional Resources

All of these metrics can have alarms created to alert you
when going over or under a set threshold for a specific
period of time.


Chapter Review

In this chapter we discussed many aspects of AWS KMS and
AWS CloudHSM. We covered key concepts, use cases,
management, and monitoring of both services. Each service
is unique, and its use cases are specific. Customers love the
“oohh-shiny” of AWS CloudHSM, but in most cases, AWS
KMS will meet their needs. It is easier to use, gives more
options for restricting access control, and works seamlessly
with other AWS services. When working with customers, it is
always a good idea to get their use case first before making
a suggestion on which encryption method to use.


Questions

  1. If you need to encrypt data that is less than 4KB, which
    would you choose?

    1. Public data key

    2. Symmetric data key

    3. Customer-managed CMK

    4. AWS-managed CMK

    5. AWS-owned CMK

    6. Private data key

  2. If you want to generate an AWS KMS asymmetric data
    key pair without the plaintext private key, which
    command would you use?

    1. GenerateDataKey

    2. GenerateDataKeyPair

    3. GenerateDataKeyWithoutPlaintext

    4. GenerateDataKeyPairWithoutPlaintext

  3. Which CMKs are shown in the AWS console? (Choose all
    that apply.)

    1. Customer-managed CMKs

    2. Customer-owned CMKs

    3. AWS-managed CMKs

    4. AWS-owned CMKs

  4. If user Tracy has an IAM policy with kms:* permissions
    but is not listed on any key policy as a user, what
    actions can she perform?

    1. Encrypt

    2. All KMS actions

    3. No KMS actions

    4. Decrypt

    5. CreateGrant

  5. Which portion of the default AWS KMS key policy is used
    to allow permissions to be controlled by AWS IAM
    policies specifically?

    1. Allow access for key administrators

    2. Allow use of the key

    3. Enable IAM user permissions

    4. Allow attachment of persistent resources

  6. If you import key material for use in AWS in three
    regions, us-east-1, us-west-2, and eu-west-1, then use
    the CMK in us-east-1 to encrypt some data client-side,
    which of the following regions can you decrypt that data
    in?

    1. eu-west-2

    2. eu-west-1

    3. us-west-2

    4. us-west-1

    5. us-east-1

  7. Which service does AWS KMS integrate with for a
    custom key store?

    1. Amazon EBS

    2. Amazon S3

    3. AWS Inspector

    4. Amazon CloudHSM

    5. AWS Certificate Manager

  8. If user Travis is listed on the key policy in the Key User
    section and has an IAM policy with kms:Encrypt and
    kms:Decrypt permissions, which of these actions can he
    perform? (Choose all that apply.)

    1. Encrypt

    2. ScheduleKeyDeletion

    3. Decrypt

    4. GenerateDataKey

    5. All KMS actions

    6. No KMS actions

  9. Using the JCE provider, which key store allows you to
    use your HSM with certificate-based operations?

    1. CloudHSM

    2. Cavium

    3. JCE

    4. Java

  10. On your CloudHSM cluster, you have a CU (Crypto User)
    named George. George wants to share his asymmetric
    key with Fred. Which of these users can perform the key
    share?

    1. Fred

    2. A CO (Crypto Officer)

    3. George

    4. The AU (Appliance User)

  11. Which of the following principals can zeroize an HSM?

    1. AU (Appliance User)

    2. CO (Crypto Officer)

    3. CU (Crypto User)

    4. Unauthenticated User

    5. All of the above

    6. None of the above


Answers

  1. C. Only customer-managed CMKs can be called directly
    with the AWS KMS API to encrypt objects less than 4KB.

  2. D. You can generate an asymmetric key pair without
    creating the plaintext private key using the
    GenerateDataKeyPairWithoutPlaintext API call.
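
    As a sketch, the same call can be made with the AWS CLI
    (alias/my-key is a hypothetical CMK alias); the response
    contains the public key and the encrypted private key, but
    no plaintext private key field:

```shell
# Returns PublicKey and PrivateKeyCiphertextBlob, but no
# PrivateKeyPlaintext field (hypothetical key alias).
aws kms generate-data-key-pair-without-plaintext \
    --key-id alias/my-key \
    --key-pair-spec RSA_2048
```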

  3. A and C. Only AWS-managed CMKs and customer-
    managed CMKs are shown in the console. AWS-owned
    CMKs are not shown, and "customer-owned CMKs" is
    not a real CMK type.

  4. C. IAM permissions alone are not enough to grant access
    to KMS CMKs. Principals must be listed on the key policy
    as well.

  5. C. The "Enable IAM user permissions" statement in the
    default key policy gives the account root principal full
    access to the CMK, which is what allows access to be
    controlled with IAM policies.
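
    For reference, the default key policy contains a statement
    like the following (111122223333 is a placeholder account
    ID). By granting the account root principal full access,
    it delegates access control for the CMK to IAM policies in
    that account:

```json
{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
  "Action": "kms:*",
  "Resource": "*"
}
```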

  6. E. Data can only be decrypted in the same region it was
    encrypted in.

  7. D. AWS KMS custom key store allows you to use KMS in
    connection with an AWS CloudHSM cluster.

  8. A and C. User Travis will only have the permissions that
    overlap between the key policy and his IAM policy. It
    takes both policies to grant permissions to KMS CMKs.
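
    A minimal sketch of this overlap in Python (a toy model,
    not the real KMS evaluation logic; it ignores explicit
    denies and conditions):

```python
def effective_kms_actions(key_policy_allows, iam_policy_allows):
    """Toy model: a principal can perform only the KMS actions
    granted by BOTH the key policy and their IAM policy."""
    return key_policy_allows & iam_policy_allows

# Travis: the Key User section of the key policy grants the usage
# actions; his IAM policy grants only Encrypt and Decrypt.
key_policy = {"kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
              "kms:GenerateDataKey*", "kms:DescribeKey"}
iam_policy = {"kms:Encrypt", "kms:Decrypt"}

print(sorted(effective_kms_actions(key_policy, iam_policy)))
```

    Only the intersection survives, which is why Travis ends up
    with Encrypt and Decrypt but not GenerateDataKey.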

  9. A. You need the CloudHSM key store to perform
    certificate-based operations.

  10. C. Only the key owner, in this case George, can share
    his key.

  11. E. Per the AWS documentation
    (https://docs.aws.amazon.com/cloudhsm/latest/userguide/hsm-users.html),
    all user types can zeroize an HSM inside your
    CloudHSM cluster.


    Additional Resources

resources, but AWS ACM Private CA can be used to issue
certificates that can be exported for on-premises use as
well. We covered how to set up both options, request
certificates, set permissions necessary for use, and audit
usage. We covered specifics about which certificates are
exportable and which AWS services they can be used with.
We also covered some tidbits of information that are
helpful when troubleshooting common issues customers
run into with these services. Both of these services are
useful in securing and encrypting your data in transit.


Questions

  1. Which of these databases are supported by AWS Secrets
    Manager natively? (Choose all that apply.)

    1. Amazon Aurora

    2. ElastiCache

    3. MariaDB RDS

    4. Neptune

    5. Amazon QLDB

    6. Oracle RDS

  2. When a secret has completed rotation, what staging
    label is associated with the original secret?

    1. AWSPENDING

    2. AWSPREVIOUS

    3. AWSCURRENT

    4. AWSCORRECT

    5. AWSNEW

  3. When creating a secret, you can select from which of
    these secret types? (Choose all that apply.)

    1. Other types of secrets

    2. Credentials for a Redshift cluster

    3. Credentials for an Aurora cluster

    4. Credentials for an ElastiCache cluster

    5. Credentials for an RDS database

  4. AWS ACM will alert you via your Personal Health
    Dashboard if a certificate is about to expire how many
    days before it actually expires? (Choose all that apply.)

    1. 50

    2. 45

    3. 30

    4. 1

    5. 10

  5. Which of these certificates can AWS ACM automatically
    renew? (Choose all that apply.)

    1. Certificates issued by AWS ACM

    2. Certificates imported into AWS ACM

    3. Certificates issued by AWS ACM Private CA

    4. Certificates issued by an on-premises CA

  6. When using e-mail validation, what is the timeframe you
    have to validate the domain before the e-mail link
    expires?

    1. 12 hours

    2. 24 hours

    3. 36 hours

    4. 72 hours

  7. When creating your entire private CA hierarchy inside
    AWS, what is the first CA you create called?

    1. Public CA

    2. Root CA

    3. Private CA

    4. Subordinate CA

  8. Which of these key algorithms are supported by AWS
    ACM Private CA? (Choose all that apply.)

    1. EC-prime256v1

    2. RSA_2048

    3. AES_128

    4. ECDSA P-224

    5. RSA_4096

    6. EC_secp384r1

  9. Which API is used to create an end-entity certificate in
    AWS ACM Private CA?

    1. DescribeCertificate

    2. IssueCertificate

    3. GetCertificate

    4. RequestCertificate

  10. Revoking a root CA certificate will cause which other
    certificates to become null and void?

    1. Subordinate CA certificates

    2. Certificates issued by your subordinate CAs

    3. Certificates issued by your root CA

    4. All of the above


Answers

  1. A, C, and F. AWS Secrets Manager only supports Oracle
    RDS, MariaDB RDS, and Amazon Aurora out of these
    options.

  2. B. When rotation completes, AWS Secrets Manager
    moves the staging label AWSCURRENT to the new secret
    version and attaches AWSPREVIOUS to the original
    version.
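
    The label movement can be sketched with a toy model in
    Python (version IDs are made up; this only mimics the
    staging-label bookkeeping, not the Secrets Manager API):

```python
def finish_rotation(labels, new_version):
    """Toy model of finishing a rotation: AWSCURRENT moves to the
    new version, and the version that previously held AWSCURRENT
    gets the AWSPREVIOUS label."""
    labels["AWSPREVIOUS"] = labels.get("AWSCURRENT")
    labels["AWSCURRENT"] = new_version
    labels.pop("AWSPENDING", None)  # pending label is removed
    return labels

labels = {"AWSCURRENT": "v1", "AWSPENDING": "v2"}
finish_rotation(labels, "v2")
print(labels)
```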

  3. A, B, and E. ElastiCache is not a supported service, and
    Aurora is covered under the credentials for an RDS
    database.

  4. B, C, and D. AWS ACM will alert you to expiration of
    your certificates via your PHD 45, 30, 15, 7, 3, and 1 day
    before expiration.

  5. A and C. AWS ACM can automatically renew certificates
    issued by the service. AWS ACM Private CA can
    automatically renew certificates issued by the service
    used to create subordinate CAs. It cannot renew end-
    entity certificates.

  6. D. You have 72 hours after creation before the e-mail
    validation link expires. You then must resend the link via
    the AWS ACM console or AWS CLI.

  7. B. When creating your entire PKI hierarchy inside AWS,
    the first CA you create is the root CA.

  8. A, B, E, and F. The correct key algorithms supported by
    AWS ACM Private CA are RSA_2048, RSA_4096,
    EC_prime256v1, and EC_secp384r1.

  9. B. IssueCertificate is the command to issue a certificate
    in AWS ACM Private CA. RequestCertificate issues a
    certificate from AWS ACM.
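
    As a sketch, the two calls look like this with the AWS CLI
    (the CA ARN, file names, and domain are hypothetical):

```shell
# ACM Private CA: issue an end-entity certificate from a CSR
aws acm-pca issue-certificate \
    --certificate-authority-arn arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/11111111-2222-3333-4444-555555555555 \
    --csr fileb://my-csr.pem \
    --signing-algorithm SHA256WITHRSA \
    --validity Value=365,Type=DAYS

# Public ACM: request a certificate for a domain instead
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS
```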

  10. D. Revoking a root CA’s certificate will cause all other
    subordinate CAs under it and all certificates issued by it
    to become null and void.


    Additional Resources

when either might be most useful for customers.
Understanding how they differ, the basic implementation of
each, and some of the operations and ways they work are
important to understanding the need for them.

We covered concepts, supported usage, supported
programming languages, and some how-to-use items. We
covered a bit about the prerequisites of each programming
language and how each offering worked with AWS services.
While not a major exam focus, we hope you learned
something about the different encryption implementations
AWS has to offer.


Questions

  1. Which programming languages offered in the AWS
    Encryption SDK use keyrings? (Choose all that apply.)

    1. Java

    2. C

    3. Python

    4. JavaScript

    5. Ruby

  2. Which keyring type allows you to use many keyrings to
    perform encryption and decryption operations?

    1. Raw RSA keyrings

    2. Multi-keyrings

    3. Raw AES keyrings

    4. AWS KMS keyrings

  3. The AWS Encryption SDK uses envelope encryption on
    which set of information?

    1. Data keys

    2. Data

    3. Master keys

    4. Encrypted messages

  4. If an encryption key becomes compromised, what is the
    spread of affected data referred to as?

    1. Area of compromise

    2. Blast compromise

    3. Blast radius

    4. Area radius

  5. When setting cache security thresholds, which item(s)
    are required? (Choose all that apply.)

    1. Minimum age

    2. Maximum age

    3. Maximum messages encrypted

    4. Maximum bytes encrypted

  6. Which attribute actions are allowed under the Amazon
    DynamoDB Encryption Client? (Choose two.)

    1. Null

    2. Encrypt Only

    3. Sign Only

    4. Do Nothing

  7. Which fields are typically included in DynamoDB
    Encryption Context? (Choose all that apply.)

    1. Item name

    2. Partition key name

    3. Sort table name

    4. Attribute name-value pairs

    5. Requested material description

  8. Which fields are encrypted and signed?

    1. Items

    2. Attributes

    3. Values

    4. Keys

  9. When using the direct KMS provider, which items are
    saved in the actual material description? (Choose all
    that apply.)

    1. amzn-ddb-env-key

    2. amzn-sig-alg

    3. amzn-ddb-sig-alg

    4. amzn-wrap-alg

  10. To avoid signature validation errors in your table when
    removing an attribute, which is the proper method to
    modify your attribute actions?

    1. Remove the attribute item

    2. Fully deploy the attribute action first

    3. Do nothing

    4. You cannot remove an attribute


Answers

  1. B and D. C and JavaScript use keyrings. Python and Java
    use master key providers. Ruby is not an offering.

  2. B. Only multi-keyrings give you the ability to encrypt
    and decrypt items under many keyrings together.

  3. A. Envelope encryption is used to encrypt the data key
    itself. The plaintext data key is used to encrypt your
    data.
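
    The structure of envelope encryption can be sketched in a
    few lines of Python. Note this is a toy model: a XOR
    "cipher" stands in for a real algorithm such as AES, and
    the local master key stands in for the KMS CMK.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" standing in for AES; never use for real data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(16)   # stands in for the KMS CMK
data_key = secrets.token_bytes(16)     # plaintext data key

ciphertext = xor(b"attack at dawn", data_key)  # data encrypted with data key
wrapped_key = xor(data_key, master_key)        # envelope: data key encrypted
del data_key                                   # only the wrapped copy is kept

# Decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = xor(ciphertext, xor(wrapped_key, master_key))
assert recovered == b"attack at dawn"
```

    The ciphertext and the wrapped data key are stored together;
    only the master key must be protected long term.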

  4. C. When an encryption key becomes compromised, the
    spread of affected data is known as the blast radius.
    This is why you should use key caching carefully. The
    larger the data encrypted with the same key, the larger
    the blast radius if compromised.

  5. B. When setting the cache security options, only the
    maximum age is required. All other fields are optional.

  6. C and D. The attribute actions offered by the Amazon
    DynamoDB Encryption Client are Encrypt and Sign, Sign
    Only, or Do Nothing. Null and Encrypt Only are not
    options.

  7. B, D, and E. Partition key name, attribute name-value
    pairs, requested material description, table name, and
    sort key name are all typically included in DynamoDB
    Encryption Context. Item name and sort table name are
    not proper values.

  8. C. The Amazon DynamoDB Encryption Client encrypts
    only the values of attributes.

  9. A and C. The values saved are amzn-ddb-env-key,
    amzn-ddb-env-alg, amzn-ddb-sig-alg, and
    amzn-ddb-wrap-alg. They always follow that naming
    convention.

  10. C. If you stop using an attribute, you do not have to
    change your attribute actions.


    Additional Resources

highest security to users around the world. Fundamental
concepts were explained, such as origin and behaviors, and
more detail was provided about security configurations that
every security architect should be aware of before using the
service. Lastly, a use case for CloudFront was presented,
along with a common configuration mistake that users
make.

While Amazon CloudFront focuses on distributing content,
another need for modern applications is the usage of web
APIs to communicate with clients. That brings us to the
Amazon API Gateway service, which works as a gatekeeper
between clients and backends to provide features such as
authorization, throttling, request and response
transformation, and validation.

Now when you need to distribute requests across multiple
server instances or containers, the Elastic Load Balancer is
the AWS choice. ELB is a family of services, including the
Classic Load Balancer, the Application Load Balancer, and
the Network Load Balancer.

Amazon CloudFront, Amazon API Gateway, and the
Application Load Balancer can’t inspect the traffic directly
with custom rules; however, they can associate with the
AWS WAF service. This theme was explained in the “AWS
Web Application Firewall” section of this chapter.

Lastly, the chapter presented the AWS Shield service, an
essential tool in the toolbox to protect applications against
DDoS attacks.


Questions

  1. What’s the feature available on CloudFront that allows
    the service to access S3 bucket objects without having
    to configure the bucket as publicly available for
    everyone?

    1. Static website hosting

    2. Origin Access Identity

    3. Bucket policy

    4. Identity and Access Manager

    5. Lambda@Edge

  2. You need to make sure that the traffic exchanged
    between clients and the Application Load Balancer
    cannot be decrypted if an attacker gains access to the
    private keys. Which feature should be enabled in the
    security policy?

    1. TLS version 1.3

    2. DES cipher

    3. Forward secrecy

    4. AES256-SHA

    5. Listener rule

  3. Your organization will release a new set of microservices
    to replace a monolithic application hosted behind an
    Application Load Balancer. What’s the TLS extension
    name supported by the ALB that allows it to host
    multiple SSL certificates in a single Load Balancer?

    1. Security group

    2. Listener group

    3. Elliptic curve

    4. Target group

    5. SNI

  4. Your organization will release a new set of microservices
    to replace a monolithic application. It will use a REST API
    on Amazon API Gateway to authorize requests from
    clients before sending it to the integration backend. The
    client sends a bearer authentication token generated by
    a third-party identity provider with the request. Which
    authorization mechanism on the Amazon API Gateway is
    the best fit for this case?

    1. Cognito authentication

    2. IAM authentication

    3. Resource policy

    4. Lambda authorizer

    5. IAM policies

  5. You are in charge of the security of an application that is
    a frequent target of massive DDoS attacks, and your
    organization does not have specialists in this type of
    attack to respond during incidents. Which AWS Shield
    offering is the best choice for your case?

    1. AWS Shield Standard

    2. AWS Shield DDoS response team

    3. AWS Shield with WAF

    4. AWS Shield Basic

    5. AWS Shield Advanced

  6. Your API created on Amazon API Gateway is under an
    attack from a specific IP address. Every client that
    accesses your API is using the same API key. What
    method can you use to block requests from this
    particular IP?

    1. Reduce the throttling limit for the API method and
      resource from which the attacker is sending
      requests.

    2. Create a security group rule blocking the specific IP
      address and associate it with the API on API
      Gateway.

    3. Contact the AWS Support team to block the specific
      IP address on your API.

    4. Create a rule on AWS WAF containing the IP address
      that you want to block, associate the rule with a web
      ACL, and associate it with the API on API Gateway.

    5. It is not possible to block specific IP addresses on
      your API on API Gateway.

  7. Your company is hosting content in an S3 bucket and
    wants to provide access to paid subscribers through
    Amazon CloudFront. What native feature from Amazon
    CloudFront can restrict access to the content only to
    authorized clients?

    1. Lambda@Edge

    2. Signed cookies or signed URLs

    3. Origin Access Identity

    4. CloudFront Geo Restriction

    5. Origin protocol policy

  8. The application team is creating a new API that must be
    available only from clients inside a VPC. What
    authorization mechanism from API Gateway can you use
    to restrict access only from certain VPCs?

    1. Cognito authentication

    2. IAM authentication

    3. Resource policy

    4. Lambda authorizer

    5. IAM policies

  9. Is it true that Route 53 allows you to create private
    hosted zones for authoritative domains you own and
    associate the zones with VPCs in multiple regions?

    1. No, you can only associate a private hosted zone
      with a single VPC.

    2. You can associate a single private hosted zone with
      many VPCs in different AWS regions, and with
      different AWS accounts.

    3. No, although you can associate a private hosted zone
      with multiple VPCs, they all need to be in the same
      AWS region.

    4. You can associate a single private hosted zone with
      many VPCs in different AWS regions; however, all the
      VPCs must be in the same AWS account.

  10. What methods can you use to reuse rules created on
    AWS WAF? (Choose two.)

    1. Web ACLs

    2. Security groups

    3. Rule groups

    4. Authorization groups

    5. Custom resources


Answers

  1. B. Origin Access Identity is a type of identity that is
    associated with a CloudFront distribution and can be
    configured in an S3 bucket policy to grant access to
    objects.
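
    A bucket policy statement granting an OAI read access
    might look like the following sketch (the OAI ID
    E2EXAMPLE and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudFrontOAI",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}
```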

  2. C. Forward secrecy is a cryptographic property,
    supported by the TLS protocol, in which ephemeral keys
    are negotiated for every session established. As a
    result, an attacker cannot replay or read the cleartext
    of past communications even after gaining access to
    the server's long-term private key.

  3. E. SNI stands for Server Name Indication and is a TLS
    extension that allows the client to indicate the domain
    of the server it is trying to reach. The server uses this
    information to select and present the matching SSL
    certificate during the TLS handshake.

  4. D. By using the Lambda authorizer, you can implement
    a custom authorization method for API Gateway
    requests.

  5. E. AWS Shield Advanced provides customers 24/7
    access to a DDoS response team with in-depth and
    specialized knowledge of this type of attack.

  6. D. Create a rule on AWS WAF containing the IP address
    that you want to block, associate the rule with a web
    ACL, and associate it with the API on API Gateway.
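
    As a sketch using the current WAFv2 API, the first step
    is creating an IP set with the address to block
    (hypothetical names; the blocking rule, web ACL, and API
    stage association are created in separate calls):

```shell
# WAFv2: create an IP set holding the attacker's address;
# a web ACL rule referencing this set then blocks the traffic.
aws wafv2 create-ip-set \
    --name blocked-ips \
    --scope REGIONAL \
    --ip-address-version IPV4 \
    --addresses 203.0.113.10/32
```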

  7. B. CloudFront can validate signed cookies or signed
    URLs created by an application, commonly granted after
    the user is authenticated.

  8. C. By using the API Gateway resource policy, you can
    allow access to requests coming from a particular VPC.
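
    A resource policy of this kind typically allows invocation
    and then denies any request that does not originate from
    the expected VPC, as in this sketch (the VPC ID is a
    placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*",
      "Condition": {
        "StringNotEquals": {"aws:SourceVpc": "vpc-1234567890abcdef0"}
      }
    }
  ]
}
```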

  9. B. AWS provides two types of hosted zones: private and
    public. Private hosted zones are associated with a VPC,
    and only resources inside an associated VPC can resolve
    the names. You can associate a private hosted zone with
    many VPCs in multiple AWS regions, and different AWS
    accounts.

  10. A and C. Web ACLs can be associated with more than
    one supported resource, and rule groups can be added
    to multiple web ACLs. Both of them can contain rules.


    Additional Resources


Table 11-2 Comparative Methods to Connect VPCs


Questions

  1. How many discrete physical locations does a single AWS
    region have?

    1. Each AWS region is a single physical location
      connected to the AWS backbone.

    2. One AWS region consists of at least two availability
      zones, each with one or more physical locations.

    3. One AWS region consists of at least three physical
      locations, each one in a different availability zone.

    4. One AWS region consists of four or more discrete
      physical locations. Each availability zone has at least
      two physical locations.

  2. You have two subnets in the same VPC, the default
    NACL applies to each subnet, and the default security
    group is associated with each EC2 instance. Can the
    EC2 instances communicate with each other?

    1. No, even though the subnets are in the same VPC,
      you need to create a route in the route table
      associated with each subnet to allow them to
      communicate.

    2. Yes, all subnets in the same VPC can communicate
      with each other. The default NACL and default
      security group authorize access.

    3. The subnets have a route to each other; however,
      the default NACL or default security group does not
      allow the communication.

    4. EC2 instances in the same VPC can always
      communicate with each other.

  3. You have launched an EC2 instance in a private subnet
    with a public IP address, and you want to connect to the
    instance from the Internet using SSH. The instance is
    associated with a security group that contains an
    inbound rule allowing SSH. Are you able to access the
    instance from the Internet?

    1. Yes, you can connect to the EC2 instance using the
      associated public IP address; the security group is
      stateful and already allows inbound connections to
      the SSH port.

    2. No, because you need to create an outbound rule in
      the security group allowing ephemeral ports.

    3. No, because the instance is in a private subnet.

    4. No, AWS blocks any remote access to EC2 instances
      coming from the Internet.

  4. Your company created three VPCs in separate AWS
    regions: VPC A, VPC B, and VPC C. They established VPC
    peering between VPC A and VPC B and between VPC B
    and VPC C. Considering that routing, NACL, and security
    groups are correctly configured, can an EC2 instance
    located in VPC A access an EC2 instance situated in
    VPC C?

    1. Yes, VPC peering is transitive. Any VPC that has
      peering established can communicate with each
      other.

    2. No, VPC transitivity is only supported in the same
      AWS region.

    3. No, VPC peering is not transitive. For instances in
      VPC A to communicate with instances in VPC C, they
      need to have VPC peering between those two VPCs.

    4. No, although VPC peering can be transitive, you
      need to enable transitivity in each VPC peering
      connection.

  5. Which type of virtual private interface does a Direct
    Connect connection use to access publicly accessible
    service endpoints for AWS services such as S3 and
    DynamoDB?

    1. Private VIF

    2. Transit VIF

    3. Public VIF

    4. DX gateway

  6. On which segments of the network does AWS encrypt
    traffic? (Choose all that apply.)

    1. Customer gateway to the Direct Connect endpoint

    2. Direct Connect endpoint to the AWS region

    3. Inter-region traffic

    4. Traffic inside the AWS region

  7. Which methods can be used to establish a VPN to an
    Amazon VPC? (Choose all that apply.)

    1. AWS site-to-site VPN over the Internet

    2. AWS site-to-site VPN over a transit VIF to the transit
      gateway

    3. Software VPN over the Internet

    4. AWS site-to-site VPN over a Direct Connect public VIF

    5. Software VPN over a Direct Connect connection

  8. How many IPsec tunnels can stay up at the same time
    when you establish a site-to-site VPN connection using
    dynamic routing to the virtual private gateway?

    1. One IPsec tunnel

    2. Two IPsec tunnels

    3. Three IPsec tunnels

    4. Four IPsec tunnels

  9. What are the advantages of using a NAT gateway
    compared with a NAT instance?

    1. You can filter outbound traffic using the NAT gateway.

    2. A NAT gateway uses AWS Hyperplane technology to
      provide high availability and scalability.

    3. You pay only for the data-out traffic.

    4. You can associate a pool of public IP addresses to a
      single NAT gateway and balance the egress traffic
      using this pool.

  10. Which connection methods and services use a hub-and-
    spoke network model? (Choose all that apply.)

    1. VPC peering

    2. Transit VPC

    3. CloudHub

    4. AWS transit gateway

    5. AWS Direct Connect


Answers

  1. B. One AWS region consists of at least two availability
    zones, each with one or more physical locations.

  2. B. Subnets in the same VPC have a local route to each
    other, and this route cannot be deleted. The default
    NACL has an inbound and outbound rule allowing any
    access. The default security group comes with an
    inbound rule allowing access from the EC2 instances
    associated with the security group.

  3. C. Even though the EC2 instance has a public IP address
    associated with it, the private subnet by definition does
    not have a route to an Internet Gateway and it cannot
    receive inbound connections from the Internet.

  4. C. VPC peering connections are not transitive, and you
    cannot enable transitivity. To provide communication
    across all VPCs using VPC peering, you need to create a
    full mesh of VPC peering connections across the VPCs.

  5. C. Public VIFs are used to connect to public service
    endpoints using a Direct Connect connection.

  6. B and C. AWS encrypts the traffic between the Direct
    Connect endpoint, where customers physically connect
    to AWS, and the AWS region. Traffic across AWS regions
    is also encrypted.

  7. A, C, D, and E. The only incorrect option is AWS site-to-
    site VPN over a transit VIF to the transit gateway. When
    you create a VPN connection using the AWS site-to-site
    VPN service, AWS provisions two public endpoints. You
    cannot connect to the public endpoints by using a
    transit VIF.

  8. B. AWS site-to-site VPN creates two public endpoints to
    establish a VPN connection. When you are using static
    routing, the customer gateway can have only one route
    to the AWS endpoint per routing domain. Equal-cost
    multipath routing is supported only by site-to-site VPN

    connections to the transit gateway. Site-to-site VPNs
    using dynamic routing use the BGP protocol to select
    the best path between the two IPsec tunnels, and both
    tunnels can stay operational at all times. BGP is
    responsible for automatically switching the routing path
    when one tunnel fails.

  9. B. A NAT gateway uses AWS Hyperplane technology to
    provide high availability and scalability. When you are
    using the NAT gateway, the egress traffic is not bounded
    to a single EC2 instance to perform the NAT task. AWS
    automatically scales the infrastructure used by the NAT
    gateway to handle the egress connections to the
    Internet.

  10. B, C, and D. VPC peering establishes a peer-to-peer
    connection between VPCs, and connections are not
    transitive. AWS Direct Connect establishes a point-to-
    point connection between on-premises networks and
    AWS.

  1. What are the different AWS networking layers that a
    request from a client on the Internet needs to pass
    through to reach an EC2 instance hosted in a public
    subnet?

    1. NAT Gateway, NACL, Security Group, Instance

    2. Internet Gateway, NACL, Security Group, Instance

    3. Internet Gateway, Security Group, NACL, Instance

    4. NACL, Internet Gateway, Security Group, Instance

    5. Internet Gateway, Security Group, NACL, Instance

  2. You have launched an EC2 instance in a public subnet
    hosting an HTTPS website. The security group
    associated with the instance has no outbound rules
    created and has an inbound rule allowing TCP 443 for
    the VPC CIDR. Will you be able to access the instance?

    1. Yes, the inbound rule allows HTTPS traffic (TCP 443).

    2. No, you should use outbound rules to allow the traffic
      coming from the Internet.

    3. No, even though the rule allows HTTPS, it only
      accepts requests from resources inside the VPC.

    4. Yes, any instance located in a public subnet is
      accessible from the Internet no matter the security
      group rules.

  3. You have been asked to troubleshoot why outbound
    traffic from an EC2 instance running in a private subnet
    is not reaching a host on the Internet. What possible
    options would you consider when troubleshooting this
    issue? (Choose two.)

    1. Check if the security group assigned to the EC2
      instance has both inbound and outbound rules that
      allow this traffic to flow through.

    2. Check if the NACL associated with the private subnet
      has both inbound and outbound rules that allow this
      traffic.

    3. Check if the EC2 instance has a public IP assigned to
      it.

    4. Check if a route to a NAT gateway exists in the route
      table attached to the private subnet.

  4. You have been asked to troubleshoot why outbound
    traffic from an EC2 instance running in a public subnet is
    not reaching a host on the Internet. What possible
    options would you consider when troubleshooting this
    issue? (Choose all that apply.)

    1. Check if the EC2 instance has a public IP assigned.

    2. Check if a route to the Internet Gateway exists in the
      route table attached to the public subnet.

    3. Check if a route to the customer gateway exists in
      the route table attached to the public subnet.

    4. Check if the NACL associated with the public subnet
      has both inbound and outbound rules that allow this
      traffic.

  5. You have been asked to troubleshoot why a CloudFront
    distribution is not able to access a private S3 bucket.
    What possible options would you consider when
    troubleshooting this issue? (Choose two.)

    1. The bucket policy is not allowing the CloudFront
      Origin Access Identity.

    2. The S3 bucket is configured to allow any public
      access.

    3. The origin in the CloudFront distribution has not been
      set to use an Origin Access Identity.

    4. The CloudFront distribution is not configured to use
      signed cookies or signed URLs.

    5. The CloudFront distribution is not configured to use
      the HTTPS protocol.

  6. Which AWS CLI command can you use to show the S3
    bucket policy?

    1. aws s3 get-bucket-policy

    2. aws s3 describe-bucket-policy

    3. aws s3api get-bucket-policy

    4. aws s3api describe-bucket-policy

  7. On which type of subnet should the NAT gateway be
    launched?

    1. A private subnet because it is used by hosts in a
      private subnet.

    2. A private subnet because it is more secure and you
      don’t need to expose the service over the Internet
      with a public IP.

    3. A public subnet because the NAT gateway requires a
      public IP and has a route to the Internet Gateway.

    4. The NAT gateway can only be launched in a public
      subnet.

  8. You need to configure your CloudFront distribution to
    distribute content for a static website using the domain
    name
    www.example.com, and your security team
    requires you to only accept HTTPS connections. Which
    configurations are mandatory to make it work? (Choose
    three.)

    1. Configure a CNAME for the domain name
      www.example.com in the CloudFront distribution.

    2. Create an S3 bucket using the name
      www.example.com to host the static content.

    3. Configure the CloudFront distribution to only accept
      HTTPS connections or redirect from HTTP to HTTPS.

    4. Create a public SSL certification for
      www.example.com in the AWS Certificate Manager.

    5. Enable caching in the CloudFront distribution.


Answers

  1. B. The traffic first reaches the Internet Gateway, which
    translates the public IP to the instance's private IP
    address; it is then evaluated by the inbound rules of the
    NACL associated with the destination instance's subnet,
    next checked against the instance's security group
    inbound rules, and lastly may also be checked by a local
    firewall on the instance.

  2. C. The inbound rule should allow any source or the IP
    address from the client trying to access the instance
    using the HTTPS protocol.

  3. B and D. A is incorrect because the instance doesn’t
    need an inbound rule to allow egress communication to
    the Internet. C is incorrect because the instance doesn’t
    need a public IP, as it is in a private subnet and only
    needs egress access to the Internet.

  4. A, B, and D. C is incorrect because communication over
    the Internet doesn’t require a customer gateway unless
    you are doing traffic hairpinning over an on-premises
    router.

  5. A and C. B is incorrect because you don’t need to
    expose an S3 bucket publicly when you are using an
    Origin Access Identity. D and E are incorrect because
    HTTPS and signed cookies or URLs are not required to
    use an S3 bucket as the origin.

  6. C. The aws s3api namespace is used for control plane–
    related actions, for example, when you are managing
    your bucket. The aws s3 namespace is used for data
    plane actions, for example, when you are copying or
    deleting objects from a bucket.

  7. C. The NAT gateway should be launched in a public
    subnet so that it can be associated with an Elastic IP
    address and has a route to an Internet Gateway.

  8. A, C, and D. You don't need to create an S3 bucket with
    the same name as the domain clients will use to access
    the website. Caching content is optional.
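The control plane/data plane split described in answer 6 maps directly to the two CLI namespaces. A quick sketch (the bucket and file names are hypothetical):

```shell
# Control plane (aws s3api): manage the bucket itself
aws s3api get-bucket-policy --bucket my-example-bucket

# Data plane (aws s3): work with the objects in the bucket
aws s3 cp report.csv s3://my-example-bucket/reports/report.csv
aws s3 rm s3://my-example-bucket/reports/report.csv
```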

image


Figure 13-7 Starting a new session with your EC2 instance


Chapter Review

In this chapter you have learned about the tools AWS
provides to increase the security of applications based on
instances: AWS Systems Manager, with Patch Manager for
patching and Session Manager for remote access. Other
aspects of host security include protecting the application
when hosting in a DevOps environment, where we add
security to the pipeline: tools that help build base images
(AMIs), and security tests in the build process that ensure
the images have the latest patches and don't carry
vulnerabilities into the production environment, using
automated vulnerability tests from Amazon Inspector and
third-party tools.


Questions

  1. Which options do you have when selecting instances to
    apply patches to in the AWS Systems Manager when
    creating a patching configuration? (Choose all that
    apply.)

    1. Patch group

    2. Select instances manually

    3. Operating system

    4. Instance tags

  2. Your company is deploying a new application, and the
    CISO wants to ensure that all instances are patched
    before going to production. The developers have built a
    pipeline to build the application using AWS
    CodeCommit, CodePipeline, and CodeBuild. Which
    solution can you implement to ensure that the AMI used
    in the build process is up to date?

    1. Using a workflow process, you ask the developers to
      include a step to send an e-mail to you when a new
      application will be deployed, and you launch a new
      EC2 instance with your base image and run the
      patching process. After that you stop the instance
      and build a new AMI image that the developers can
      use in their pipeline and continue the process.

    2. You create an EC2 Image Builder pipeline with the OS
      used by the application and apply the patching and
      security tests to build the AMI. A successful AMI
      build sends an SNS message that triggers a Lambda
      function that updates a parameter store used by the
      developer team to reference the AMI ID.

    3. Let the developers add a stage in the AWS
      CodePipeline to build the application with AWS
      CodeBuild, and inside the CodeBuild they add the
      sudo yum update -y command. This will ensure that
      after building the application, the OS is updated.

    4. Import an image with VM import/export to AWS, and
      from this secure image you give the AMI ID to the
      developers to use this to build the application.

  3. Which service can be used to protect instances from
    application-layer remote attacks, provide process
    protection, and scale to support spikes in utilization
    without becoming a bottleneck?

    1. AWS ELB

    2. VPC transit gateway

    3. Host-based intrusion prevention system

    4. Security groups

  4. Is it possible to remotely access an EC2 instance without
    direct network access using AWS Systems Manager?

    1. No, you always need to allow SSH or RDP to some
      external network where you can connect to the
      instance.

    2. Yes, you can restrict access to the EC2 instance only
      by the VPC and have another EC2 instance as the
      bastion host.

    3. Yes, you can configure AWS Systems Manager –
      Session Manager to access the EC2 instance through
      an AWS endpoint, even if the EC2 instance doesn't
      have external access.

    4. Yes, you can set up a VPN between an external
      network and the VPC.

  5. The networking team has created a private VPC with no
    direct Internet access. The only way to reach this VPC
    is over Direct Connect from the company data center. In
    addition, the CISO has requested that remote access not
    be allowed by SSH, RDP, or Internet connectivity; the
    instances can only be accessed through AWS Session
    Manager. The instances are configured with an attached
    IAM role with the correct IAM policies, but it's still
    not possible to connect to the EC2 instances. What could
    be the problem?

    1. You need to add an IGW (Internet Gateway) to the
      VPC.

    2. The instances are missing a security group allowing
      0.0.0.0/0 to HTTPS to all instances.

    3. The VPC is missing a VPC endpoint to the SSM
      services.

    4. The VPC is not configured with a NAT gateway or
      Internet Gateway. Add them to the VPC to allow
      communication with SSM.


Answers

  1. A, B, and D. The valid options are patch group, select
    instances manually, and instance tags.

  2. B. C is incorrect, as you are not updating the AMI that
    will be used by the application. A and D are incorrect
    because these will be outdated over time and will take
    more and more time to be patched, which will be
    impractical in the long term.

  3. C. ELB doesn't provide application-layer inspection.
    WAF can add this protection, but it is not among the
    answer options. D is a stateful firewall and doesn't
    provide application-layer protection.

  4. C. Using AWS Systems Manager – Session Manager, the
    EC2 instance doesn't need access to the Internet;
    access to the EC2 instance is made through the AWS
    Systems Manager service. The only requirements are that
    the EC2 instance have an SSM policy and that the VPC
    have access to the SSM endpoint.

  5. C. A, B, and D either do not meet the CISO's
    requirements or are incomplete.
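Answers 4 and 5 both come down to the VPC being able to reach the Systems Manager service. In a VPC with no Internet path, that means creating interface VPC endpoints for the ssm, ssmmessages, and ec2messages services; the sketch below assumes us-east-1 and uses hypothetical VPC, subnet, and security group IDs:

```shell
# One interface endpoint per SSM-related service
for svc in ssm ssmmessages ec2messages; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234def567890 \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.us-east-1.${svc}" \
    --subnet-ids subnet-0abc1234def567890 \
    --security-group-ids sg-0abc1234def567890
done
```

The security group attached to the endpoints must allow inbound HTTPS (443) from the instances, which is what answer option 2 in question 5 hints at.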


    Additional Resources

can have; AWS Organizations starts with a default SCP
attached, called FullAWSAccess, that allows all access from
the member account. If you want to change this behavior,
you can add SCPs that deny actions, or remove
FullAWSAccess and attach more restrictive SCPs. Some
AWS resources have policies that restrict access from the
resource side, and S3 specifically has features like ACLs
and Block Public Access to restrict access to buckets and
objects.
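For example, a restrictive SCP in the following style (a sketch modeled on the AWS-documented region-deny pattern; the list of exempted global services would need tuning for your environment) denies any action outside us-east-2 while FullAWSAccess remains attached to allow everything else:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideOhio",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": "us-east-2" }
      }
    }
  ]
}
```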

It is important to understand how IAM works by creating
and testing policies yourself. IAM itself has no cost:
creating IAM policies, running AWS CLI commands to receive
credentials, and testing access are all free and easy to do.
AWS also provides a free tier for some services, so you can
test against S3 buckets and objects at low or no cost (AWS
offers 12 months free for some services).
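One way to internalize the evaluation logic is to sketch it in code. The following is a deliberate simplification (it ignores conditions, resource policies, boundaries, and wildcards) that captures the core rule: an explicit deny always wins, an explicit allow is required to grant access, and everything else is implicitly denied:

```python
def evaluate(statements, action):
    """Simplified IAM evaluation: explicit Deny beats Allow; default is implicit deny."""
    decision = "implicitDeny"
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "explicitDeny"  # a matching Deny ends evaluation immediately
            decision = "allow"         # a matching Allow grants access unless denied
    return decision

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny", "Action": ["s3:PutObject"]},
]
print(evaluate(policy, "s3:GetObject"))     # allow
print(evaluate(policy, "s3:PutObject"))     # explicitDeny
print(evaluate(policy, "s3:DeleteObject"))  # implicitDeny
```

Because a matching Deny short-circuits, reordering the statements never changes the result, which is why statement order does not matter in real IAM policies either.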


Questions

  1. What is the default time expiration for a credential
    received by the STS AssumeRole request if the user has
    not changed the timeout in the role?

    1. 15 minutes

    2. 1 hour

    3. 24 hours

    4. 8 hours

  2. A company wants to integrate the existing Microsoft
    Active Directory with AWS to simplify authentication and
    access to multiple AWS accounts. What AWS service can
    provide integration with Microsoft Active Directory and
    centrally manage access to multiple AWS accounts?

    1. AWS Single Sign-On (SSO)

    2. AWS IAM users

    3. Amazon Cognito user pools

    4. AWS AD Connector

      image

      image


      Figure 14-15 AWS policy decision workflow


  3. A developer is creating a serverless application and is
    informed that his Lambda function is not working
    properly, as it is not showing any logs in CloudWatch
    and he can’t access the DynamoDB table. After some
    investigation you find the IAM role has this policy
    attached:


    image


    What IAM policy should you attach to the AWS Lambda
    role to solve this problem with the minimum privilege?

    image

    image


  4. You received a request to enforce that your company
    only use resources from the Ohio region (US-East-2) and
    you must apply this enforcement to all AWS accounts
    under your AWS Organizations. What is the best
    approach to meet this requirement?

    image



    image


    C. It currently is not possible to restrict regions in AWS.

    D. Configure AWS SSO and deny access based on the
    source IP.

  5. You are asked to deploy an EC2 instance to host a Java-
    based application that will access a DynamoDB table.
    Which of the following is the most secure way to
    configure this EC2 instance to access the DynamoDB
    table?

    1. Use KMS keys with the right permissions to the
      DynamoDB table and assign it to the EC2 instance.

    2. Create an IAM policy with a Deny All actions
      statement and another Allow statement only to this
      DynamoDB table and assign this IAM policy to an
      IAM user. Generate credentials to this IAM user and
      use those credentials inside the Java code.

    3. Create an IAM policy with permissions to access the
      DynamoDB table and associate this IAM policy with
      an IAM role that will be associated with the EC2
      instance.

    4. Use IAM groups with IAM policy permissions to
      access the DynamoDB table and associate it with
      the EC2 instance.

  6. Your company needs to provide files to external users,
    but InfoSec doesn’t allow you to make those files public
    without any authentication; if you are going to use
    authentication, each user has access to only specific
    objects. The IAM policy to run the application has this
    statement:


    image


    How can you create an application that uses the least
    privilege from the policy to allow only the objects the
    users will access?

    1. You can’t because the IAM policy allows getObject in
      all objects in the bucket and you can’t change this.

    2. Using an EC2 instance, you can use the AWS CLI
      presign command to generate a presigned URL for the
      object that you want to return to the user.

    3. Create an EC2 instance and associate the IAM policy
      with the EC2 instance role, create a web app, and
      use Amazon Cognito to authenticate. In your app
      backend you create an API that receives an
      authenticated request from your user with the object
      and return a presigned URL with only the object of
      this user.

    4. Create an EC2 instance, create a web app, and use
      Amazon Cognito with the IAM role attached to the
      authenticated users with the IAM policy. When the
      users are authenticated in Cognito, you invoke the
      STS API AssumeRoleWithWebIdentity with a session
      policy to allow only the objects this user has
      permission to access.

  7. Your company started using AWS for experimentation,
    and after some time you have multiple AWS accounts
    spread over different lines of business, and each one
    has its own IAM users and no standard security controls
    are in place. The CISO asked you what solutions you can
    deploy to start having a centralized authentication
    mechanism where all accounts authenticate using the
    corporate AD. The CISO wants to apply standard
    security policies to all AWS accounts to comply with the
    corporate standards. What AWS service can help you
    with this? (Choose two.)

    1. AWS STS

    2. AWS SSO

    3. AWS Cognito

    4. AWS Organizations

  8. You are the admin of all AWS accounts in your company,
    and the dev team has asked to have more freedom in
    deploying their applications when they need to deploy
    services like Lambda and need to create IAM policies
    and IAM roles. Currently, you create those policies and
    roles and give to them the ARNs to be used in their
    CloudFormation templates to deploy. However, this is
    slow, and every time the policy needs to be updated,
    they have to send the request to you. What is the best
    option to give the dev team access to IAM to create IAM
    policies but restrict them to only a limited number of
    services and actions?

    1. Use SCPs to restrict which services and actions can
      be executed, apply this to a DEV OU, and add the
      AWS accounts the dev team is working to this OU.

    2. Add the dev team to your Admin group in IAM, and
      they can create any IAM policy and role.

    3. Create a permissions boundary policy with the
      services the dev team is allowed to use, and create
      an IAM policy giving permissions to create/update
      IAM policies and roles but with a condition that the
      permission boundary policy is attached to the IAM
      API actions, and attach the dev users to this policy.

    4. Create an IAM policy with permissions to the services
      the dev team needs to use in their Lambda, and
      give them IAM permissions to create, update, and
      attach IAM policies and IAM roles.


Answers

  1. B. AWS usually uses conservative values for credentials,
    but 15 minutes would be too short for normal use, and
    the actual default is 1 hour. Fifteen-minute credentials
    are more appropriate for applications that need
    credentials for just one operation, such as a file
    upload or a report download. More information can be
    found here:
    https://amzn.to/2WwXSDZ

  2. A. AWS SSO provides the functionality asked for in the
    question with less effort than the other answer options.
    It can integrate with Active Directory and is more
    appropriate for a multiaccount strategy because, by
    default, AWS SSO can authenticate and redirect the user
    to accounts inside the AWS organization structure. AWS
    IAM can integrate with AD through AD Connector or
    Managed AD, but this approach is for a single AWS
    account; providing access to multiple accounts with
    AWS IAM is more involved than using AWS SSO. And
    Cognito is a service that is more appropriate for mobile
    or web application authentication.

  3. B. This is a tricky question, as more than one answer
    can solve the issue. The Lambda function normally needs
    access to the Amazon CloudWatch service to generate the
    logs of every invocation and the function output. The
    other API actions required here are for DynamoDB and S3.
    Looking at the four IAM policies in the options, only
    two have the required permissions: B and D. The best
    choice is always the IAM policy that provides the
    minimum privilege level to accomplish the task. D can
    accomplish the same task, but B is more restrictive.

  4. B. Answer option A can be applied to IAM users, but
    this job would need to be done in every AWS account for
    every IAM user; for enterprise companies with hundreds
    or thousands of AWS accounts, that is not practical. B
    is the right option because from the AWS Organizations
    management account you can create one SCP and apply it
    to all AWS accounts under the root OU. C is incorrect
    because it is possible to restrict access based on the
    AWS region, and D does not accomplish the goal even if
    it is a valid solution.

  5. C. Answer option A is related to encryption only and
    doesn't address giving access to the DynamoDB table. B
    is valid, but using hardcoded credentials is not the
    best approach, as EC2 instances support IAM roles. D is
    invalid because AWS IAM groups don't provide
    credentials. The best answer here is C.

  6. C. D can generate the correct permission running on the
    client side, but it makes an attack easier: once
    authenticated, an attacker could realize that with those
    credentials he can request a temporary credential from
    STS scoped to all objects in the bucket instead of just
    a specific prefix.

  7. B and D. AWS STS can also be used as part of the
    process, but it doesn't cover everything that is asked
    in the question. Cognito is a service targeted more at
    application authentication than at AWS account
    authentication.

  8. C. A permissions boundary restricts the maximum
    permissions that a policy can grant, and you add a
    condition in the dev users' policy that allows them to
    create, update, or attach policies and roles only when
    the permissions boundary is attached. A is incorrect
    because with an SCP you can allow or deny actions, but
    you can't create an SCP that allows CloudFormation to
    create IAM policies and roles restricted to specific API
    actions. B will work, but the dev team would have more
    permissions than needed, since full admin permissions
    allow them to create anything. D is incorrect because it
    only adds the permissions the Lambda service needs plus
    permissions to create, update, and attach policies,
    which would allow the dev users to create any IAM policy
    with any privilege. Even if more than one option can
    solve the problem, you must choose the one that is more
    secure or provides the best approach.
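The condition from answer 8 can be sketched as an IAM policy statement like the following (the account ID and boundary policy ARN are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRoleWorkOnlyWithBoundary",
      "Effect": "Allow",
      "Action": ["iam:CreateRole", "iam:AttachRolePolicy", "iam:PutRolePolicy"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::111122223333:policy/DevBoundary"
        }
      }
    }
  ]
}
```

Any attempt to create or modify a role without that exact boundary attached falls through to the implicit deny.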


    Additional Resources

    Many resources are available on the Internet about AWS
    identity and access management that you can use as a
    source of study or reference. Here is a list of some resources
    that you can use to better understand the concepts covered
    in this chapter.

https://amzn.to/3jisQtd

you can specify a value for the DurationSeconds
parameter. You can specify a value from 900 seconds
(15 minutes) up to the maximum session duration
setting for the role. If you specify a value higher than
this setting, the operation fails. For example, if you
specify a session duration of 12 hours but your
administrator set the maximum session duration to 6
hours, your operation fails. To learn how to view the
maximum value for your role, see View the Maximum
Session Duration Setting for a Role in the AWS
documentation link:
https://amzn.to/3etUhhV
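In practice you pass the parameter on the call itself; this sketch requests a 2-hour session for a hypothetical role:

```shell
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/ExampleRole \
  --role-session-name example-session \
  --duration-seconds 7200
```

If 7200 seconds exceeds the role's configured maximum session duration, the call fails.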


Chapter Review

In this chapter you have learned how to troubleshoot the
authentication process on AWS, the tools available to
analyze authentication, and the common errors when doing
federation. Keep in mind the tools available and what you
can automate using CloudTrail, Access Analyzer, and the
native AWS tools to detect and remediate these problems.


Questions

  1. Your CISO has mandated that all software license keys
    for your application must be stored centrally, in an
    encrypted format, in SSM Parameter Store. It is now
    time to upgrade the software, and in order to get access
    to the upgrade, your application needs to access the
    license key string. You scheduled the upgrade for last
    weekend; however, most of the upgrades failed. What
    do you suspect the problem could be? (Choose two.)

    1. The EC2 instance role does not have permission to
      use KMS to decrypt the parameter.

    2. The EC2 instance role does not have permission to
      read the parameter in SSM Parameter Store.

    3. The EC2 instance role does not have permission to
      use KMS to encrypt the parameter.

    4. SSM Parameter Store does not have permission to
      use KMS to decrypt the parameter.

  2. When accessing specific AWS resources, you encounter
    some problems with permissions. What are possible
    reasons for those issues? (Choose two.)

    1. Since no permissions boundary or STS assume role
      policy exists, applicable permissions policies alone
      control access. These are checked together and
      always in the following order: identity-based policies
      first, then resource-based, and finally ACLs.

    2. You checked that you have sufficient permissions but
      then switched roles.

    3. Your request to a resource is implicitly denied
      because there is no explicit ALLOW statement in the
      permissions boundary policy for the applicable user
      or role.

    4. Your API request to the resource is denied because of
      an AWS Organizations permissions boundary defined
      by a service control policy, which has a relevant
      DENY statement in it. However, you are not a
      member of an account that is a member of that
      organization.

  3. You are trying to debug your Lambda function; however,
    you notice that you are not receiving log events from
    either Lambda or S3. What could be the reason for this?

    1. Your function does not have permission to write data
      events to CloudWatch, or your S3 bucket is not
      authorized to log data events to CloudWatch.

    2. Your function does not have permission to write data
      events and you need to enable cross-origin resource
      sharing to allow S3 to send data events to
      CloudTrail.

    3. You need to enable data events in CloudWatch.

    4. You need to enable data events in Lambda and S3.

  4. An engineer approached you, asking for help with an
    IAM policy that he has created but is not working
    correctly with his user. The IAM policy is this:


    image


    What is the problem with this IAM policy?

    1. The resource "arn:aws:cloudfront:*" is missing the
      region and account values.

    2. The resource "arn:aws:s3:::examplebucket" doesn't
      apply to the actions.

    3. The resource "arn:aws:s3:::examplebucket" is
      missing the "/*" at the end of the ARN.

    4. You need to create two separate statements: one for
      CloudFront and another for S3.

  5. You are asked to create an IAM policy for a Lambda
    function with the minimum privilege to access the
    DynamoDB table myTable. What IAM policy best suits
    this request?

    image

    image



    image


  6. You want to ensure that all access to one of your S3
    buckets is encrypted with SSL/TLS. How can you
    accomplish this?

    1. Create an IAM policy with S3 actions, with the S3
      bucket as the resource and a condition of
      aws:SecureTransport set to true, and apply this IAM
      policy to all users.

    2. Create an S3 bucket policy with permissions to any
      principal with a condition of aws:SecureTransport set
      to true, and apply this bucket policy to the bucket
      that you want to protect.

    3. Create a CloudFront distribution and point all users to
      use this CloudFront with HTTPS to access the S3
      bucket.

    4. Enable the S3 bucket encryption with server-side
      encryption (SSE).

  7. Which of the following types of IAM policies can be
    created and administered by you and can be attached
    to multiple users, groups, or roles within your account?

    1. All IAM policies

    2. Customer-managed policies

    3. Inline policies

    4. AWS-managed policies

  8. How can you give an application running on EC2
    permission to read objects located in an S3 bucket?

    1. Create an IAM role with read access to the bucket
      and associate the role with the EC2 instance.

    2. Create an IAM user and associate this user with an
      in-line policy with read access to the S3 bucket,
      generate AccessKey/SecretKey credentials, and
      configure the EC2 instance with those credentials.

    3. Create an IAM group with read permissions to the S3
      bucket and associate the EC2 instance with this
      group.

    4. Create an AWS Transfer for FTP and point to the S3
      bucket.


Answers

  1. A and B. The only possible reasons for not being able
    to access the parameter are that the EC2 instance
    doesn't have permission to read the parameter in SSM
    Parameter Store and/or doesn't have permission to
    decrypt it, as AWS Systems Manager Parameter Store uses
    a KMS CMK to encrypt and decrypt the parameter. C is
    incorrect because you are trying to retrieve the
    parameter and so need to decrypt, not encrypt. D is
    incorrect because Parameter Store uses the requester's
    permissions to decrypt.

  2. B and C. This is a tricky question because all the
    answers look valid. But A is incorrect because the
    evaluation order it gives is wrong and there is no ACL
    evaluation, and D is incorrect because an SCP is not a
    permissions boundary.

  3. A. B and D are incorrect, and C is valid but not
    complete.

  4. B. Looking at the IAM policy, the S3 actions are all
    bucket-level API actions. The reason this IAM policy is
    not working is that the S3 resource ARN doesn't apply to
    those actions. C looks valid, but if you change the ARN
    to the objects inside the bucket, the IAM policy will
    still be invalid. The only correct answer here is B.

  5. C. Comparing all the IAM policies available, the most
    detailed in terms of actions and resources is C. All the
    other IAM policies are more open in terms of actions or
    resources.

  6. B. You might think D is the correct answer, but the
    question asks about access to the S3 bucket, not
    encryption at rest. To protect data in transit, you need
    to ensure that access to your S3 bucket is encrypted.

  7. B. Inline policies are attached to a single user,
    group, or role. AWS-managed policies are administered by
    AWS and you can't change them.

  8. A. All the answers may look plausible, but the most
    secure way to do this is A.
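The policy from answer 6 is usually written the other way around from the option's wording: a Deny on any request that arrives without TLS, which is the pattern AWS documents for aws:SecureTransport (bucket name hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```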


Additional Resources

You can find more information about IAM troubleshooting in
these links:

https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html

https://docs.aws.amazon.com/kms/latest/developerguide/policy-evaluation.html

https://docs.aws.amazon.com/awssupport/latest/user/troubleshooting.html


APPENDIX A

Objective Map



Exam SCS-C01

image



image

image

image